Our system considers only humans; another sapient alien race may implement this system, and consider only themselves.
Restricting attention to humans means assuming that utility monsters can’t exist. Most people who are interested in the utility monster problem won’t accept such a strong assumption without justification.
A) what cousin_it said.
B) consider, then, successively more and more severely mentally nonfunctioning humans. There is some level of incapability at which we stop caring (e.g. head crushed), and I would be somewhat surprised at a choice of values that puts a 100% abrupt turn-on at some threshold; and if it did, I expect some human could be found or made who would flicker across that boundary regularly.
This is wrong, at least for typical humans such as myself: we do not stop caring about the one with the crushed head merely because they are on the wrong side of a boundary, but because we have no way to bring them back across it. If we had a way to bring them back, we would care. So if someone is flickering back and forth across the so-called boundary, we will still care about them, since by stipulation they can come back.
Good point; how about someone who is stupider than the average dog?
I don’t think this is a good illustration, at least for me, since I would never stop caring about someone as long as it was clear that they were biologically human, and not brain dead.
I think a better illustration would be this: take your historical ancestors one by one. If you go back far enough in time, one of them will be a fish, which we would not care about in any human way, at least. But on that point I agree with what you said about values: we will care less and less, gradually, as we go back; there will not be any boundary where we suddenly stop caring.