If death is horrible then I should fight death, not fight my own grief.
Yes, but how do you tell whether death is horrible or not?
In other words, how is one supposed to know which of the following is true?
1. Preventing death is the real terminal value. Grief is a rational feeling that has instrumental value (for motivating oneself to fight death).
2. Avoiding grief is the real terminal value. Preventing death is just a subgoal of avoiding grief (and one should fight grief directly if that’s easier/more effective).
First I was like, “wow, good question!” Then I was like, “ooh, that one’s easy”. In world A people are just like us except they don’t die. In world B people are just like us except they don’t feel grief about dying. Right now, do you prefer our world to evolve into world A or world B? As far as I can tell, this is the general procedure for distinguishing your actual values from wireheading.
That’s not really a fair comparison, is it? There is no reason to choose world B since in world A nobody feels grief about dying either (since nobody dies).
To make it fair, I think we need to change world A so that nobody dies, but everyone feels grief at random intervals, so that the average amount of grief is the same as in our world. Then it’s not clear to me which world I should prefer...
You’re right, I didn’t think about that. However, if avoiding grief were a terminal value but avoiding death weren’t, you’d be indifferent between world A and world B (in my original formulation). Are you?
I do prefer world A to world B in your original formulation. Unfortunately, from that fact we can only deduce that I’m not certain that avoiding death isn’t a terminal value. But I already knew that...
If there is a fact of the matter on whether avoiding death is a terminal value, where does that fact reside? Do you believe your mind contains some additional information for identifying terminal values, but that information is somehow hidden and didn’t stop you from claiming that “you’re not certain”?
I’m not Wei_Dai, but in the general case that is how facts of the matter work.
World A is clearly better, because not only can people in it not feel grief, but they can do so indefinitely, without death stopping them from not feeling grief.
Both are terminal values to some extent. Where “consequentialist” evolution had a single (actual) outcome in mind, any instrumental influence on that process had a chance of getting engraved in people’s minds. Godshatter can’t clearly draw boundaries, asserting values that apply only to a particular class of situations and not at all to others. Any given psychological drive influences the moral value of all situations (although this influence can be insignificant in some situations and decisive in others). Where we are uncertain, the level of this influence is probably non-trivial.