Someone who is currently altruistic towards humanity should
Wei, the question here is would rather than should, no? It’s quite possible that the altruism that I endorse as a part of me is related to my brain’s empathy module, much of which might break down if I find I cannot relate to other humans. There are of course good fictional examples of this, e.g. Ted Chiang’s “Understand” (http://www.infinityplus.co.uk/stories/under.htm) and, ahem, Watchmen’s Dr. Manhattan.
Logical fallacy: Generalization from fictional evidence.
A high-fidelity upload who was previously altruistic toward humanity would still be altruistic during the first minute after awakening; their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change.
If you start doing code modification, of course, some but not all bets are off.
Well, I did put a disclaimer by using the standard terminology :) Fiction is good for suggesting possibilities; you cannot derive evidence from it, of course.
I agree on the first-minute point, but I don’t see why it’s relevant: there is also the 999999th minute, by which point value drift will have taken over (if altruism is strongly related to empathy). I guess upon waking up I’d make value preservation my first order of business, but since an upload is still evolution’s spaghetti code, it might be a race against time.
their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change.
I don’t see why this is necessarily true, unless you treat “altruism toward humanity” as a terminal goal.
When I was a very young child, I greatly valued my brightly colored alphabet blocks; but today, I pretty much ignore them. My mind has developed to the point where I can fully visualize all the interesting permutations of the blocks in my head, should I need to do so for some reason.
I don’t see why this is necessarily true, unless you treat “altruism toward humanity” as a terminal goal.
Well, yes. I think that’s the point. I certainly don’t value other humans only for the way that they interest me; if that were so, I probably wouldn’t care about most of them at all. Humanity is a terminal value to me, or, more generally, the existence and experiences of happy, engaged, thinking sentient beings. Humans qualify regardless of whether or not uploads exist (and uploads, of course, also qualify).
How do you know that “the existence and experiences of happy, engaged, thinking sentient beings” is indeed one of your terminal values, and not an instrumental value?
Perhaps the idea is that the sensory experience of no longer falling into the category of “human” would cause the brain to behave in unexpected ways?
I don’t find that especially likely, mind you, although I suppose that in the long term a self-serving “em supremacy” meme might arise.
+1 for linking to Understand; I remembered reading the story long ago, but I forgot the link. Thanks for reminding me!