This question is no good. Would you choose untranslatable-1 or untranslatable-2? I very much doubt that reliable understanding of this can be reached using human-level philosophy.
I think it is clear what a copy of you in its own world is. Just copy, atom-for-atom, everything in the solar system, and put the whole thing in another part of the universe such that it cannot interact with the original you. If copying the other people bothers you, just consider the value of the copy of you itself, ignoring the value or disvalue of the other copies.
It’s clear what the situations you talk about are, but these are not the kind of situation your brainware evolved to morally estimate. (This is not a case of a situation too difficult to understand, nor a case of a situation involving opposing moral pressures.) The “untranslatable” metaphor was intended to go a step further than you interpreted it (this is explained more clearly in my second comment).
Oh, OK. But the point of this post and the follow-up is to try to make inroads into morally estimating this, so I guess wait for the sequel.
Roko, have you seen my post The Moral Status of Independent Identical Copies? There are also some links in the comments of that post to earlier discussions.
We’ll see. I just have very little hope for progress to be made on this particular dead horse. I offered some ideas about how it could turn out that, at the human level, progress can’t in principle be made on this question (and some similar ones).
Can you call this particular issue a ‘dead horse’ when it hasn’t been a common subject of argument before? (I mean, most of the relevant conversations in human history hadn’t gone past the sophomoric question of whether a copy of you is really you.)
If you’re going to be pessimistic on the prospect of discussion, I think you’d at very least need a new idiom, like “Don’t start beating a stillborn horse”.
I like the analogy!
What kind of philosophy do we need, then?
This is a question about moral estimation. Simple questions of moral estimation can be resolved by observing the reactions of people to situations they evolved to consider: to save vs. to eat a human baby, for example. For more difficult questions involving unusual or complicated situations, or situations involving contradicting moral pressures, we simply don’t have any means of extracting information about their moral value. The only experimental apparatus we have are human reactions, and this apparatus has only so much resolution. The quality of theoretical analysis of observations made using this tool is also rather poor.
To move forward, we need better tools and better theory. Both could be obtained by improving humans, by making smarter humans that can consider more detailed situations and perform moral reasoning about them. This is not the best option, since we risk creating “improved” humans that have slightly different preferences, and so moral observations obtained using the “improved” humans will be about their preferences and not ours. Nonetheless, for some general questions, such as the value of copies, I expect that the answers given by such instruments would also be true of our own preferences.
Another way is of course to just create an FAI, which will necessarily be able to do moral estimation of arbitrary situations.