I think the flowchart for thinking about this question should look something like:
1. In the least convenient possible world, where following your interests did not maximize utility, are you pretty sure you really would forgo your personal interests to maximize utility? If no, go to 2; if yes, go to 3.
2. Why are you even thinking about this question? Are you just trying to come up with a clever argument for something you’re going to do anyway?
3. Okay, now you can think about this question.
I can’t answer your question because I’ve never gotten past 2.
I mostly agree, except that question 1 shouldn’t say:
“In a least convenient world, would you utterly forgo all self-interest in return for making some small difference to global utility?”
It should say: “… is there any extent to which impact on strangers’ well-being would influence your choices? For example, if you were faced with a choice between reading a chapter of a kind-of-interesting book with no external impact, or doing chores for an hour and thereby saving a child’s life, would you sometimes choose the latter?”
If the answer to that latter question is yes—if expected impact on others’ well-being can potentially sway your actions at some margin—then it is worth looking into the empirical details, and seeing what bundles of global well-being and personal well-being can actually be bought, and how attractive those bundles are.
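The “bundles” framing above can be made concrete with a toy model: treat each available action as a pair of (personal well-being, strangers’ well-being) payoffs and ask whether any positive weight on strangers ever changes which action gets picked. All names and numbers below are invented purely for illustration, not drawn from the discussion.

```python
# Toy model of choosing between "bundles" of personal and global well-being.
# Each option is (name, personal_utility, global_utility); all values are
# made up for illustration only.

def best_option(options, altruism_weight):
    """Pick the option maximizing personal + weight * global utility."""
    return max(options, key=lambda o: o[1] + altruism_weight * o[2])

options = [
    ("read a chapter", 10, 0),      # pleasant, no external impact
    ("do chores, donate", 4, 100),  # less fun, large impact on a stranger
]

# A purely selfish agent (weight 0) reads the chapter...
assert best_option(options, 0.0)[0] == "read a chapter"
# ...but even a small positive weight on others flips the choice,
# which is exactly the "can it sway you at some margin?" test.
assert best_option(options, 0.1)[0] == "do chores, donate"
```

The point of the sketch is that the interesting empirical question is where the crossover weight sits for the bundles actually on offer, not whether one would forgo all self-interest.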
I object to this being framed as primarily about others versus self. I pursue FAI for the perfectly selfish reason that it maximizes my expected life span and quality. I think the conflict being discussed is about near interest conflicting with far interest, and how near interest creates more motivation.
Because even if we don’t have the strength or desire to willingly renounce all selfishness, we recognize that better versions of ourselves would do so, and that perhaps there’s a good way to make some lifestyle changes that look like personal sacrifices but are actually net positive (and even more so when we nurture our sense of altruism)?
Isn’t this statement also a clever argument for why you’re not going to do it anyway, at least to an extent?

Not a clever argument, more of an admission of current weakness. Admitting a current weakness has the advantage of having the obvious next step of “consider becoming stronger”.
But saying “Pursuing my interests would increase utility anyway” has the disadvantage of requiring no further actions. Which is fine if it’s true, but if you evaluate the truth of the statement while you still have this potential source of bias lurking in the background, it might not be.
For what it’s worth, I have no trouble answering “yes” to 1, because for me it doesn’t have the altruistic connotations it probably has for other people. My utility function is very selfish and I’m okay with that.
Maybe the personal interests are the real utility, but we don’t want to admit it, because for our survival as members of a social species it is better to pretend that our utility is aligned with the utility of others, although the two are probably only weakly positively correlated. In a more complex society the correlation is probably even weaker, because the choice space is larger.
Or maybe the mechanism that chooses our utilities is simply broken in this society, because it evolved in an ancestral environment. It probably works something like: “if you are rewarded for doing something, or you see someone else being rewarded for it, develop a strong desire for it, and keep that desire even when you are not being rewarded (because the goal may require long-term effort).” In the ancestral environment you would see rewards mostly for useful things. Today there are too many exciting things. I’m not saying they are mostly bad, just that there are too many of them, so people’s utility functions are spread too thin, and sometimes there are not enough people with the desire to do certain critical tasks.