If indeed you know of a course of action that would benefit someone more than the one they currently want to take, you can offer them an incentive to change their mind willingly. A bet would do: “If you try X instead and afterward don’t honestly think it’s better than the Y you thought you wanted, I’ll pay you Z in recompense for your trouble.” (Of course, I’m skeptical that me-in-the-future has any right to define, retroactively, what was best for me-in-the-past, since they aren’t actually the same person, but let’s just assume that for now.)
This is totally ethical and does not infringe upon subjective freedom of will. I do not think anyone has the right to force anyone else to change their mind, or to act against what they believe they want, unless their preferred course of action would actually endanger their life (as when a parent picks up a toddler who has walked into the road). Even if they’re wrong, it’s their responsibility to be wrong and learn from it, not to be saved from mistakes they haven’t yet made.
I haven’t yet decided whether interfering with intentional suicide is ethical. (My suspicion is that suicide is immoral, as it is the murder of all one’s possible future selves, who would not, were they present now, consent to being prevented from existing; preventing suicide is then likely an acceptable tradeoff, protecting their rights while infringing upon those of the suicidal person. But it will take more thought.)
To me it seems that the individual is always the arbiter of what is best for them. Only that individual—not anyone else, not even an AI modeling their mind. Of course, a sufficiently powerful AI could easily use that mind model to convince them to desire different things, but the extrapolated volition is nonetheless not legitimate until the person willingly accepts it—the AI does not have the right to implement it on its own, without consent. (And I, personally, would not give an AI blanket consent to manage my affairs.)
Hmm. That suicide example does suggest a way in which your view could be interpreted as true within my framework, now that I think about it. But since I don’t consider entities to be identical to their past or future versions, it sounded very wrong to me. Nobody can be wrong about what they want right now. But people can be mistaken about what future versions of themselves would have wanted them to do right now, due to lack of knowledge about the future. And inasmuch as you consider yourself, though not identical to those future selves, to be continuous with them (the same person “in essence”), you ought to take their desires into account. Since you can be mistaken about that, others who can prove they know better have the right to interfere on those future selves’ behalf… but only the future selves themselves have the right to say whether that interference was legitimate. Hence the bet I described at the beginning. Interesting! Thanks for the opportunity to think about this.