To what extent should an agent’s utility definition extend beyond their own person?
I’m not sure how to evaluate the “should” in the question, but most people I know (myself included) do include events they’ll never directly perceive in their decisions.
Personally, I recognize that some of my current happiness and motivation is based on imagining potential future events that I think I’m exceedingly unlikely to actually experience. I make decisions based on the likely impact on others outside my perception-cone: strangers I’ll never meet or interact with, who may well be figments of the mass media’s imagination.
Whether these un-meetable person-placeholders in my imagined decision-consequence timeline are contemporaneous but physically removed, or distantly removed in time, is kind of irrelevant.
I wonder what this philosophical stance is called.