Would “The explicit personality modification would just affect your fear of death” be an acceptable formulation?
In which case I think my original lobotomy comparison would be relevant again. If I were still “allowed” to update my beliefs, there would be landslide changes originating from the “fear of death removed” modification, whether we mean just the emotion or our drive to avoid death. In either case the former “I” would cease to exist, whether via lobotomy or via an unrecognizable personality.
If someone transplanted the belief “I can actually fly” into your personality, and an observer compared your actions before and after acquiring it, don’t you think the differences would be so profound as to belie any claim that “the personality is pretty much unchanged, other than that”?
If you actually believed you could fly, you would simply jump out of a window (at least if no one were looking). After all, you can fly!
Same with the fear of death.
(You could argue that only the emotion gets removed, not your reasoning about death, in which case you would still know that dying would forever separate you from your loved ones. However, while you don’t fear death (by fiat), you would fear losing your loved ones (you still “care about the same things you care about now”), and that fear would necessarily drive you to avoid dying. So a scenario in which only your death-related emotions are affected, not your reasoning apparatus, wouldn’t change much of anything:
Fear of death, as it stands, is largely fear conferred from other things. If you were, however, to disallow any fear (of e.g. losing your loved ones) from also applying to your own death, your emotional makeup wouldn’t just be unrecognizable as (your former) “you”; it might barely be recognizable as human.
What would it mean not to fear death, yet to know all the things, future activities, and encounters that would be lost to you if you died? How could you be allowed any feelings about anything at all, lest they also transfer to death? A whole lot of reprogramming would be required just to make the “remove fear of death” scenario coherent, even as a hypothetical. I must be missing something.)
Do I understand you correctly that a sapient being with a knowingly finite lifespan must necessarily either fear death or (accidentally) commit suicide? If so, that’s a pretty sweeping statement, and I can think of several counterexamples.
I don’t see how it would be possible to feel emotions, attachments specifically, toward anything without those also translating into emotions regarding death.
Any scheme that prohibits fear in relation to death would either significantly mess with the intricate web of other emotions (modifying e.g. “I originally wanted to be with my loved ones, but I don’t want to be with them in the future, since I may be dead by then”, or else leaving you fearful of never seeing your loved ones again because of dying, which gets conspicuously close to “fear of death”), or it would just ignore death altogether, to avoid making the updates that are contingent on it. So it comes close to the statement you attribute to me.
Think of it like a Bayesian belief propagation graph. If you propagate the change, the overall changes would be huge. The only way to avoid them is to cut out the node and pretend it’s still there, like beeping out a name whenever it comes up. However, that leads to the failure mode of what happens when you run across that node while coming to a decision: eventually, accidental suicide (it’s a rather important node in your day-to-day life).
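To make the propagation analogy concrete, here is a toy sketch (my own illustration, not anything from the thread; the node names, numbers, and functions like `propagate` and `decide` are all made up) of the two options: clamping the fear node to zero and letting the change flow downstream, versus excising the node while the decision procedure still expects it.

```python
# Toy three-node chain: perceived_risk -> fear_of_death -> avoid_drive.
# All numbers are arbitrary; this only illustrates the argument, it doesn't
# model anything real.

def propagate(fear_of_death, perceived_risk=0.9):
    """Push one number down the chain, a stand-in for belief propagation."""
    return {"fear": fear_of_death, "avoid": fear_of_death * perceived_risk}

def decide(beliefs):
    """Crude decision rule: take the risky shortcut only if the propagated
    'avoid death' drive is weak."""
    return "take risky shortcut" if beliefs["avoid"] < 0.5 else "play it safe"

# Option 1: propagate the change honestly -> behaviour shifts drastically.
print(decide(propagate(fear_of_death=0.9)))  # play it safe
print(decide(propagate(fear_of_death=0.0)))  # take risky shortcut

# Option 2: cut out the node and pretend it's still there.
beliefs = propagate(fear_of_death=0.9)
del beliefs["avoid"]          # excise the node instead of updating it
try:
    decide(beliefs)           # the decision rule still queries the missing node
except KeyError:
    print("decision procedure runs into the missing node")
```

Either branch mirrors the dilemma above: honest propagation changes behaviour beyond recognition, while excision leaves a decision procedure that fails exactly when the missing node matters.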
(As an aside, I remember arguments here on LW that AIXI would accidentally commit suicide; I don’t remember the details, unfortunately.)
Just wondering: suppose someone (say, at a meetup) offered you $100 to come up with a counterargument you would find convincing. Would you be able to?
It’s possible that given sufficient reflection my position would change, but that’s possible with almost any of my beliefs. I could probably find counterarguments to a strict dichotomy of “p-zombie or accidental suicide”, and there may be clever ways of stopping the lack of fear of death from propagating while still implementing some “automatically avoid death” reflex, similar to your ankle-jerk reflex.
I don’t, however, see how the change could be identity-preserving, unless the threshold for identity is chosen to be sufficiently lax. I’d consider myself a different person, which is why I thought of the lobotomy comparison. (Of course I could be artificially made to feel like the same person, but my present self would still object to that, just as it objects to the endless heroin drip. Gandhi wouldn’t take the pill that turns him into a mindless killer even if you told him that killer-Gandhi would think he’d always been that way.)
Why do you ask, if I may ask?