> It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by “same person” I of course mean that it is related to a preexisting mind in certain ways.
One might also invoke Big Universe considerations to say that even the “new” kind of mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so one would regardless be choosing between two kinds of minds that have existed once. Which just goes to show that the whole “this mind has existed once, so it should be given priority over one that hasn’t” argument doesn’t make a lot of sense.
We seem to have a moral intuition that things that occur in far-distant parts of the universe, with no causal connection to us, aren’t morally relevant. You seem to think that this intuition is a side effect of the population ethics principle you endorse (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
Evidence for my view is the fact that we tend to also discount the desires of causally unconnected people in distant parts of the universe in non-population-ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast number of Forest-Maximizing AIs who doubtless exist in some distant part of the Big World should be considered.
Minds that existed once, and were causally connected to our world in certain ways, should be given priority over minds that have only existed in distant, causally unconnected parts of the Big World.
> If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by “same person” I of course mean that it is related to a preexisting mind in certain ways.
“Clearly the better choice” is stating your conclusion rather than making an argument for it.
> We seem to have a moral intuition that things that occur in far-distant parts of the universe, with no causal connection to us, aren’t morally relevant. You seem to think that this intuition is a side effect of the population ethics principle you endorse (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
> Evidence for my view is the fact that we tend to also discount the desires of causally unconnected people in distant parts of the universe in non-population-ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast number of Forest-Maximizing AIs who doubtless exist in some distant part of the Big World should be considered.
There’s an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can’t find out about our decisions, and that the extent to which their preferences are satisfied therefore isn’t affected by anything we do.
One could of course make arguments relating to acausal trade, or suggest that we should try to satisfy even the preferences of beings who never find out about it. But to do that, we would have to know something about the distribution of preferences in the universe. And there our uncertainty is so immense that it’s better to just focus on the preferences of the humans here on Earth.
But in any case, these kinds of considerations don’t seem relevant for the “if we create new minds, should they be similar to minds that have already once existed” question. It’s not like the mind that we’re seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe. Rather, our part of the universe contains information that can be used for creating a mind that resembles an earlier mind, and it also contains information that can be used for creating a more novel mind. When the decision is made, both minds are still non-existent in our part of the universe, and existent in some other.
> “Clearly the better choice” is stating your conclusion rather than making an argument for it.
I assumed that the rest of what I wrote made it clear why I thought it was clearly the better choice.
> There’s an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can’t find out about our decisions
If that were the reason, then people would feel the same about causally connected entities who can’t find out about our decisions. But they don’t. People generally consider it bad to spread rumors about people, even if those people never find out. We also consider it immoral to ruin the reputation of dead people, even though they can’t find out.
I think a better explanation for this intuition is simply that we have a bedrock moral principle to discount unsatisfied preferences unless they are about a person’s own life. Parfit argues similarly here.
This principle also explains other intuitive reactions people have. For instance, in this problem given by Steven Landsburg, people tend to think the rape victim has been harmed, but that McCrankypants and McMustardseed haven’t been. This can be explained if we consider that the victim’s preference was about her own life, whereas the preferences of the other two weren’t.
Just as we discount preference violations on a personal level that aren’t about someone’s own life, so we can discount the existence of distant populations that do not impact the one we are a part of.
> and that the extent to which their preferences are satisfied therefore isn’t affected by anything we do.
Just because someone never discovers that their preference isn’t satisfied doesn’t make it any less unsatisfied. Preferences are about desiring one world-state over another, not about perception. If someone makes the world different from the way you want it to be, then your preference is unsatisfied, even if you never find out.
Of course, as I said before, if said preference is not about one’s own life in some way we can probably discount it.
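To make the distinction explicit, here’s a minimal formal sketch (assuming we model a preference extensionally, as the set of world-states that would satisfy it):

$$\mathrm{Sat}(P, w) \iff w \in S_P, \quad \text{where } S_P \subseteq W$$

Here $W$ is the set of possible world-states and $S_P$ is the subset that the preference $P$ picks out. Nothing on the right-hand side refers to any agent’s perceptions or beliefs, which is exactly the point: whether the preference-holder ever learns that $w \notin S_P$ has no bearing on whether $P$ is unsatisfied.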
> It’s not like the mind that we’re seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe.
Yes it does, if you think four-dimensionally. The mind we’re seeking to recreate exists in our universe’s past, whereas the novel mind does not.
People sometimes take actions because a dead friend or relative would have wanted them to. We also take action to satisfy the preferences of people who are certain to exist in the future. This indicates that we do indeed continue to value preferences that aren’t in existence at this very moment.