How about arguments from analogy that are broadly evocative?
The underlying question seems to be:
To what extent should an agent’s utility definition extend beyond their own person?
A ready example would be effective altruism: should an effective altruist care about bequeathing their fortune after death, given that they are not around to observe the outcome? Intuitively, subcultural conditioning would bring most people to say yes, but what if I turned it around? For example, a suicidal woman may strongly advocate for a right to suicide. Would she be maximising her utility by publishing a note calling on like-minded suicidal people to kill anti-suicide policy-makers/politicians before killing themselves, in order to pressure those policy-makers to change their stance and to raise awareness for dying with dignity? The post-death value-maximising approach should be consistent in both the EA example and the suicide example, I should think.
To what extent should an agent’s utility definition extend beyond their own person?
I’m not sure how to evaluate “should” in the question, but most people I know (including myself) “do” include events they’ll never directly perceive in their decisions.
Personally, I recognize that some of my current happiness and motivation is based on imagining potential future events that I think are exceedingly unlikely for me to actually experience. I make decisions based on likely impact on others outside of my perception-cone, such as strangers I’ll never meet or interact with, and who may well be figments of the mass-media’s imagination.
Whether these un-meetable person-placeholders in my imagined decision-consequence timeline are contemporaneous but physically removed, or distantly removed in time, is kind of irrelevant.
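To make the two stances concrete, here is a minimal sketch of the distinction in play (all names and numbers are hypothetical illustrations, not from the discussion): an agent whose utility is defined only over outcomes it is alive to perceive, versus one whose utility ranges over world-states regardless of whether the agent exists to experience them.

```python
# Minimal sketch (hypothetical names and values throughout): the same
# agent under two utility definitions, differing only in whether
# post-death outcomes count.

def experienced_utility(outcomes, alive_at):
    """Sum value only over outcomes the agent is around to perceive."""
    return sum(value for t, value in outcomes if alive_at(t))

def world_utility(outcomes, alive_at):
    """Sum value over all outcomes, perceived or not."""
    return sum(value for _, value in outcomes)

# Toy timeline of (time, value) pairs. The agent dies at t=2;
# the bequest pays off at t=5, after death.
outcomes = [(0, 1.0), (1, 1.0), (5, 3.0)]
alive = lambda t: t < 2

print(experienced_utility(outcomes, alive))  # 2.0 -> the bequest counts for nothing
print(world_utility(outcomes, alive))        # 5.0 -> the bequest dominates
```

On the first definition the bequest (and the suicide-note campaign) is strictly valueless to the agent; on the second, both can be straightforwardly utility-maximising, which is the consistency point above.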
I wonder what this philosophical stance is called?