I wish you would just pretend that they care about things a million times further into the future than you do.
I don’t need to pretend. Modulo some mathematical details, it is the simple truth.
We have crossed wires here. What I meant is that I wish you would stop protesting about infinite utilities—and how non-discounters are not really even rational agents—and just model them as ordinary agents who discount a lot less than you do.
Objections about infinity strike me as irrelevant and uninteresting.
It is just that, since I cannot tell whether or not what I do will make such people happy, I have no motive to pay any attention to their preferences.
Is that your true objection? I expect you can figure out what would make these people happy easily enough most of the time—e.g. by asking them.
Yes, the far future is unpredictable—but in decision theory, that tends to make it a uniform grey, not an unpredictable black-and-white strobing pattern.
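To make that concrete, here is a toy sketch (invented numbers, nothing more): if the far-future payoff of each option is a large, unpredictable draw from the same distribution no matter which option you pick, it adds the same expected value to every option and drops out of the comparison.

```python
import random

random.seed(0)

actions = ["save", "spend"]

# Near-term payoffs differ by action (illustrative numbers only).
near_payoff = {"save": 10.0, "spend": 7.0}

def far_payoff(action):
    # Far-future payoff: large and unpredictable, and (the key assumption
    # here) drawn from the same distribution whichever action we pick.
    return random.gauss(0.0, 100.0)

def estimated_expected_utility(action, samples=200_000):
    total = sum(near_payoff[action] + far_payoff(action) for _ in range(samples))
    return total / samples

for a in actions:
    print(a, round(estimated_expected_utility(a), 1))

# Both estimates land near the near-term payoff: the noisy far-future term
# averages out to the same constant for every action, so it cannot change
# which action wins. That is the "uniform grey".
```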
Yet, it seems that the people who care about the future do not agree with you on that. Bostrom, Yudkowsky, Nesov, et al. frequently invoke assessments of far-future consequences (sometimes in distant galaxies) in justifying their recommendations.
Indeed. That is partly poetry, though (big numbers make things seem important), and partly because they think that the far future will be highly contingent on near-future events.
The thing they are actually interested in influencing is mostly only a decade or so out. It does seem quite important—significant enough to reach back to us here anyway.
If what you are trying to understand is far enough away to be difficult to predict, and very important, then that might cause some oscillations. That is hardly a common situation, though.
Most of the time, organisms act as though they want to become ancestors. To do that, the best thing they can do is focus on having some grandkids. Expanding their circle of care out a few generations usually makes precious little difference to their actions. The far future is unforeseen and usually can't be directly influenced, so it is rarely all that relevant. Usually, you leave it to your kids to deal with.
That point about just asking them is valid. So, I am justified in treating them as rational agents to the extent that I can engage in trade with them. I just can't enter into a long-term Nash bargain with them in which we jointly pledge to maximize some linear combination of our two utility functions in an unsupervised fashion. They can't trust me to do what they want, and I can't trust them to judge their own utility as bounded.
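To spell out that last worry with a toy example (made-up weights and payoffs, not anyone's actual utility function): once the stakes one party claims are allowed to grow without bound, any fixed linear weighting of the two utility functions ends up tracking only that party's preferences.

```python
# Toy model of a joint pledge to maximize w1*U1 + w2*U2 (invented numbers).
# U1 is a modest, near-term utility function; U2 scales with how much
# far-future weight the other party claims is at stake.

options = ["A", "B", "C"]

def u1(option):
    # My utilities: bounded, near-term differences.
    return {"A": 5.0, "B": 3.0, "C": 1.0}[option]

def u2(option, stakes):
    # Their utilities: proportional to the (possibly enormous) stakes
    # they assign to far-future outcomes.
    return {"A": -1.0, "B": 0.0, "C": 1.0}[option] * stakes

def joint_choice(w1, w2, stakes):
    return max(options, key=lambda o: w1 * u1(o) + w2 * u2(o, stakes))

for stakes in [1.0, 10.0, 1e6, 1e12]:
    print(stakes, joint_choice(w1=0.9, w2=0.1, stakes=stakes))

# With small stakes the joint choice is "A" (my favourite); as their claimed
# stakes blow up, every weighting short of w2 = 0 ends up picking "C".
```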
I think this is back to the point about infinities. The one I wish you would stop bringing up—and instead treat these folk as though they are discounting only a teeny, tiny bit.
Frankly, I generally find it hard to take these utilitarian types seriously in the first place. A "signalling" theory (holier-than-thou) explains the unusually high prevalence of utilitarianism among moral philosophers—and an "exploitation" theory explains its prevalence among those running charitable causes (utilitarianism-says-give-us-your-money). Those explanations do a good job of modelling the facts about utilitarianism—and are normally a lot more credible than the supplied justifications—IMHO.
Which suggests that we are failing to communicate. I am not surprised.
I do that! And I still discover that their utility functions are dominated by huge positive and negative utilities in the distant future, while mine are dominated by modest positive and negative utilities in the near future. They are still wrong even if they fudge it so that their math works.
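To put rough, made-up numbers on that (a sketch of plain exponential discounting, not anyone's actual valuation): with a per-year discount factor well below one, nearly all the weight falls within a human lifetime; with a factor only a teeny, tiny bit below one, nearly all of it falls beyond it.

```python
def weight_within(horizon_years, discount_factor, total_years=10_000):
    # Fraction of the total discounted weight (sum of d**t over total_years)
    # that falls within the first `horizon_years` years.  total_years is an
    # arbitrary cutoff; pushing it further out only strengthens the contrast
    # for discount factors close to 1.
    weights = [discount_factor ** t for t in range(total_years)]
    return sum(weights[:horizon_years]) / sum(weights)

for d in (0.90, 0.999999):
    print(d, round(weight_within(horizon_years=50, discount_factor=d), 4))

# d = 0.90     -> ~0.995: essentially all the weight sits within 50 years.
# d = 0.999999 -> ~0.005: a "teeny, tiny" discount puts over 99% of the
#                         weight on events more than 50 years out.
```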
I went from your “I can’t trust them to judge their own utility as bounded” to your earlier “infinity” point. Possibly I am not trying very hard here, though...
My main issue was that you apparently thought you couldn't predict their desires well enough to find mutually beneficial trades. I'm not really sure whether this business about not being able to agree to maximise some shared function is a big deal for you.