I know I prefer to exist now. I’d also like to survive for a very long time, indefinitely. I’m also not even sure the person I’ll be 10 or 20 years from now will still be significantly “me”. I’m not sure the closest projection of my self onto a system incapable of suffering at all would still be me. Sure, I’d prefer not to suffer, but beyond that, there’s a certain amount of suffering I’m ready to endure if I have to in order to stay alive.
Then on the other side of this question you could consider creating new sentiences who couldn’t suffer at all. But why would these have a priority over those who exist already? Also, what if we created people who could suffer, but who’d be happy with it? Would such a life be worth living? Is the fact that suffering is bad something universal, or a quirk of terran animals’ neurology? Pain is both sensory information and the way this information is used by our brain. Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you know chopping your leg off is most often not a good idea.
Then on the other side of this question you could consider creating new sentiences who couldn’t suffer at all. But why would these have a priority over those who exist already?
From the point of view of those who’ll actually create the minds, it’s not a choice between somebody who exists already and a new mind. It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
One might also invoke Big Universe considerations to say that even the “new” kind of a mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they’ll be choosing between two kinds of minds that have existed once regardless. Which just goes to show that the whole “this mind has existed once, so it should be given priority over one that hasn’t” argument doesn’t make a lot of sense.
Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you know chopping your leg off is most often not a good idea.
Yes. See also David Pearce’s notion of beings who’ve replaced pain and pleasure with gradients of pleasure—instead of having suffering as a feedback mechanism, their feedback mechanism is a lack of pleasure.
Then on the other side of this question you could consider creating new sentiences who couldn’t suffer at all. But why would these have a priority over those who exist already?
From the point of view of those who’ll actually create the minds, it’s not a choice between somebody who exists already and a new mind. It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
I’m proposing to create these minds, if I survive. Many will want this. If we have FAI, it will help me, by its very definition.
I would rather live in a future afterlife that has my grandparents in it than your ‘better designs’. Better by whose evaluation? I’d also say that my sense of ‘better’ outweighs any other sense of ‘better’ - my terminal values are my own.
One might also invoke Big Universe considerations to say that even the “new” kind of a mind has already existed in some corner of the universe
I couldn’t care less about some corner of the universe that is not causally connected to my corner. The big world stuff isn’t very relevant: this is a decision between two versions of our local future, one with people we love in it and one without.
Those who will actually create the minds will want to rescue people in the past, so they can reasonably anticipate being rescued themselves. Or differently put, those who create the minds will want the right answer to “should I rescue people or create new people” to be “rescue people”.
There’s a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history. I suspect the latter are sufficiently more interesting that they would be created first. We might move on to creating the populations of interesting alternate histories, as well as randomly selected worlds and so forth, down the line.
Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all) it seems like there is less utility to trying to resurrect them as it would only be a duplication. Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it’s hard to say how common they would be throughout the universe—thus it would seem unfair to give them a priority in resurrection compared to naturally evolved ones.
There’s a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history.
What difference is that?
Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all) it seems like there is less utility to trying to resurrect them as it would only be a duplication.
I don’t understand what you mean by “only a duplication”.
Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it’s hard to say how common they would be throughout the universe—thus it would seem unfair to give them a priority in resurrection compared to naturally evolved ones.
This doesn’t make any sense to me.
Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child’s well-being?
There’s a causal connection in one case that is absent in the other, and a correspondingly higher distribution in the pasts of similar worlds.
I don’t understand what you mean by “only a duplication”.
Duplication of effort as well as effect with respect to other parts of the universe. Meaning that you are increasing the number of immortals rather than granting continued life to those who would otherwise be deprived of it.
Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child’s well-being?
We aren’t talking about the creation of random new lives as a matter of reproduction, we’re talking about the resurrection of people who have lived substantial lives already as part of the universe’s natural existence. If you want to resurrect the most people (out of those who have actually existed and died) in order to grant them some redress against death, you are going to have to recreate people who, for physically plausible reasons, would have actually died.
It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by same person I of course mean that it is related to a preexisting mind in certain ways.
One might also invoke Big Universe considerations to say that even the “new” kind of a mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they’ll be choosing between two kinds of minds that have existed once regardless. Which just goes to show that the whole “this mind has existed once, so it should be given priority over one that hasn’t” argument doesn’t make a lot of sense.
We seem to have a moral intuition that things that occur in far distant parts of the universe that have no causal connection to us aren’t morally relevant. You seem to think that this intuition is a side effect of the population ethics principle you apparently believe in (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
Evidence for my view is the fact that we also tend to discount the desires of causally unconnected people in distant parts of the universe in non-population-ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast number of Forest-Maximizing AIs who doubtless exist out there should be considered, even though there is likely some part of the Big World in which they exist.
Minds that existed once, and were causally connected to our world in certain ways, should be given priority over minds that have only existed in distant, causally unconnected parts of the Big World.
If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by same person I of course mean that it is related to a preexisting mind in certain ways.
“Clearly the better choice” is stating your conclusion rather than making an argument for it.
We seem to have a moral intuition that things that occur in far distant parts of the universe that have no causal connection to us aren’t morally relevant. You seem to think that this intuition is a side effect of the population ethics principle you apparently believe in (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
Evidence for my view is the fact that we also tend to discount the desires of causally unconnected people in distant parts of the universe in non-population-ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast number of Forest-Maximizing AIs who doubtless exist out there should be considered, even though there is likely some part of the Big World in which they exist.
There’s an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can’t find out about our decisions and that the extent to which their preferences are satisfied isn’t therefore affected by anything that we do.
One could of course make arguments relating to acausal trade, or suggest that we should try to satisfy even the preferences of beings who never found out about it. But to do that, we would have to know something about the distribution of preferences in the universe. And there our uncertainty is so immense that it’s better to just focus on the preferences of the humans here on Earth.
But in any case, these kinds of considerations don’t seem relevant for the “if we create new minds, should they be similar to minds that have already once existed” question. It’s not like the mind that we’re seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe. Rather, our part of the universe contains information that can be used for creating a mind that resembles an earlier mind, and it also contains information that can be used for creating a more novel mind. When the decision is made, both minds are still non-existent in our part of the universe, and existent in some other.
“Clearly the better choice” is stating your conclusion rather than making an argument for it.
I assumed that the rest of what I wrote made it clear why I thought it was clearly the better choice.
There’s an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can’t find out about our decisions
If that were the reason, then people would feel the same about causally connected entities who can’t find out about our decisions. But they don’t. People generally consider it bad to spread rumors about people, even if they never find out. We also consider it immoral to ruin the reputation of dead people, even though they can’t find out.
I think a better explanation for this intuition is simply that we have a bedrock moral principle to discount dissatisfied preferences unless they are about a person’s own life. Parfit argues similarly here.
This principle also explains other intuitive reactions people have. For instance, in this problem given by Steven Landsburg, people tend to think the rape victim has been harmed, but that McCrankypants and McMustardseed haven’t been. This can be explained if we consider that the preference the victim had was about her own life, whereas the preferences of the other two weren’t.
Just as we discount preference violations on a personal level that aren’t about someone’s own life, so we can discount the existence of distant populations that do not impact the one we are a part of.
and that the extent to which their preferences are satisfied isn’t therefore affected by anything that we do.
Just because someone never discovers that their preference isn’t satisfied doesn’t make it any less unsatisfied. Preferences are about desiring one world state over another, not about perception. If someone makes the world different from the way you want it to be, then your preference is unsatisfied, even if you never find out.
Of course, as I said before, if said preference is not about one’s own life in some way we can probably discount it.
It’s not like the mind that we’re seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe.
Yes it does, if you think four-dimensionally. The mind we’re seeking to recreate exists in our universe’s past, whereas the novel mind does not.
People sometimes take actions because a dead friend or relative would have wanted them to. We also take action to satisfy the preferences of people who are certain to exist in the future. This indicates that we do indeed continue to value preferences that aren’t in existence at this very moment.
It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
Still I wonder, then: what could I do to enhance my probability of being resurrected, if worst comes to worst and I can’t manage to stay alive to protect and ensure the posterity of my own current self, given that I am not one of those better minds (better according to which values, though)?
I realize that this probably won’t be very useful advice for you, but I’d recommend working on letting go of the sense of having a lasting self in the first place. Not that I fully alieve that yet either, but the closer I’ve gotten to always alieving it, the less I’ve felt like I have reason to worry about (not) living forever. Me possibly dying in forty years is no big deal if I don’t even think I’m the same person tomorrow, or five minutes from now.
Me possibly dying in forty years is no big deal if I don’t even think I’m the same person tomorrow, or five minutes from now.
You’re confusing two meanings of the word “the same.” When we refer to a person as “the same”, that doesn’t mean they haven’t changed; it means that they’ve changed in some ways, but not in others.
If you define “same” as “totally unchanging” then I don’t want to be the same person five minutes from now. Being frozen in time forever so I’d never change would be tantamount to death. There are some ways I want to change, like acquiring new skills and memories.
But there are other ways I don’t want to change. I want my values to stay the same, and I want to remember my life. If I change in those ways, that is bad. It doesn’t matter whether it happens in an abrupt way, like dying, or a slow way, like an FAI gradually turning me into a different person.
If people change in undesirable ways, then it is a good thing to restore them through resurrection. I want to be resurrected if I need to be. And I want you to be resurrected too. Because the parts of you that shouldn’t change are valuable, even if you’ve convinced yourself they’re not.
You’re confusing two meanings of the word “the same.” When we refer to a person as “the same”, that doesn’t mean they haven’t changed; it means that they’ve changed in some ways, but not in others.
Sure, I’m aware of that. But the bit that you quoted didn’t make claims about what “the same” means in any objective sense—it only said that if you choose your definition of “the same” appropriately, then you can stop worrying about your long-term survival and thus feel better. (At least that’s how it worked for me: I used to worry about my long-term survival a lot more when I still found personal identity to be a meaningful concept.)
I’ve pondered this some, and it seems that the best strategy in distant historical eras was just to be famous, and more specifically to write an autobiography. Having successful ancestors also seems to grow in importance as we get into the modern era. For us today we have cryonics of course, and being successful/famous/wealthy is obviously viable, but blogging is probably to be recommended as well.