I can imagine somebody who picks (2) here but still ends up acting more or less normally. You can take the attitude that the future person commonly identified with you is nobody special, yet be an altruist who cares about everybody, including that person. And since that person is (at least in the near future, and even in the far future when it comes to long-term decisions like education and life insurance) the one most susceptible to your (current) influence, you'll still pay more attention to them. In the extreme case, the altruistic disciple of Adam Smith believes that everybody will be best off if each person cares only about the good of the future person commonly identified with them, because of the laws of economics rather than the laws of morality.
But as you say, this runs into (6). I think that with a perfectly altruistic attitude, you'd only fight to survive because you're worried that your attacker is a homicidal maniac who's likely to terrorise others, or because you have responsibilities to others that you are best placed to fulfill. And that reasoning doesn't extend to cryonics. So to take care of extreme altruists, rewrite (6) to specify that you know that your death will lead your attacker to reform and make restitution by living an altruistic life in your stead (but that they'll die of overexertion if you fight back).
Bottom line: if one takes consequence (2) of answering No to question (3), question (3) should still be considered solved (not an objection), but (6) still remains to be dealt with.