You are right about the weirdness signal; my questions don’t get at this.
As for (3), wouldn’t a “yes” response imply that you do care about the past and future versions of yourself?
When you write “but I just happen to be a sort of creature that gets upset when the future ‘me’ is threatened and constantly gets overcome with an irresistible urge to work against such threats at the present moment—but this urge doesn’t extend to the post-cryonics ‘me,’ so I’m rationally indifferent in that case,” you seem to be saying that your utility function is such that you don’t care about the post-cryonics you, and since one can’t claim a utility function is irrational (excluding things like intransitive preferences), this objection to cryonics isn’t irrational.
Perhaps the best way to formulate my argument would be as follows. When someone appears to care about his “normal” future self a few years from now, but not about his future self that might come out of a cryonics revival, you can argue that this is an arbitrary and whimsical preference, since the former “self” doesn’t have any significantly better claim to his identity than the latter. Now let’s set aside any possible counter-arguments to that claim, and for the sake of the argument accept that this is indeed so. I see three possible consequences of accepting it:
1. Starting to care about one’s post-cryonics future self, and (assuming one’s other concerns are satisfied) signing up for cryonics; this is presumably the intended goal of your argument.
2. Ceasing to care even about one’s “normal” future selves, and rejecting the very concept of personal identity and continuity. (Presumably leading to either complete resignation or to crazy impulsive behavior.)
3. Keeping one’s existing preferences and behaviors with the justification that, arbitrary and whimsical as they are, they are no more so than any other options, so you might as well not bother changing them.
Now, the question is: can you argue that (1) is more correct or rational than (2) or (3) in some meaningful way?
(Also, if someone is interested in discussions of this sort, I forgot to mention that I raised similar arguments in another recent thread.)
I can imagine somebody who picks (2) here but still ends up acting more or less normally. You can take the attitude that the future person commonly identified with you is nobody special, yet be an altruist who cares about everybody, including that person. And since that person is (at least in the near future, and even in the far future when it comes to long-term decisions like education and life insurance) most susceptible to your (current) influence, you’ll still pay more attention to them. In the extreme case, the altruistic disciple of Adam Smith believes that everybody will be best off if each person cares only about the good of the future person commonly identified with them, because of the laws of economics rather than the laws of morality.
But as you say, this runs into (6). I think that with a perfectly altruistic attitude, you’d only fight to survive because you’re worried that your attacker is a homicidal maniac who’s likely to terrorise others, or because you have some responsibilities to others that you can best fulfill by staying alive. And that doesn’t extend to cryonics. So to take care of extreme altruists, rewrite (6) to specify that you know your death will lead your attacker to reform and make restitution by living an altruistic life in your stead (but that your attacker will die of overexertion if you fight back).
Bottom line: if one takes consequence (2) of answering “no” to question (3), then question (3) should still be considered solved (not an objection), but (6) still remains to be dealt with.