I think the example with selfishness is wrong even on technical grounds. It’s pretty easy to construct examples where people help others even though they themselves suffer for it, and while you can concoct strained reasons why even this would count as selfish (like insane hyperbolic discounting), Occam’s razor says we should go with the simple explanation, i.e. that people actually care about others. Nate’s post about it is good: http://mindingourway.com/the-stamp-collector/
Not that this settles anything, but I think it’s possible to make the definition of “selfishness” broad enough that everything a rational agent does counts as selfish.
Like, you can also make the definition of “god” broad enough that the probability of a god existing gets arbitrarily close to 1 (for example, by counting gravity as a god). Similarly, if we define “selfishness” as “maximizing your utility function”, then every rational agent is selfish by the very definition of “rational agent”, even though the utility function can value other people. Of course, as the text quoted above says, the word has then lost all its usefulness.
I think even an extreme example like “what about an agent who is forced, under threat of death, to do something that decreases their utility?” falls under that broad definition, because a rational agent will only go along with it if they expect death to be even worse under their utility function.
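To make the broad-definition point concrete, here’s a toy sketch (all names and numbers are hypothetical, just for illustration): an agent that picks whichever action maximizes its own utility function, where that utility function happens to put weight on someone else’s welfare. The agent ends up helping at a cost to itself, yet it is still “selfish” under the definition above.

```python
# Toy sketch (hypothetical example): "maximizing your utility function" is too
# broad a definition of selfishness, because the utility function is free to
# value other people.

def utility(outcome):
    # The agent's own payoff plus how much it cares about its friend's payoff.
    # The weight on the friend's welfare is part of the agent's own preferences.
    return outcome["own_payoff"] + 2.0 * outcome["friend_payoff"]

# Two available actions: helping costs the agent something but benefits the friend.
actions = {
    "help":      {"own_payoff": -1.0, "friend_payoff": 3.0},
    "dont_help": {"own_payoff":  0.0, "friend_payoff": 0.0},
}

# A "rational agent" in the broad sense just picks the utility-maximizing action.
best_action = max(actions, key=lambda a: utility(actions[a]))
print(best_action)  # -> "help": altruistic behavior, yet "selfish" by the broad definition
```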
Of course, humans are not really rational agents, so the original question of whether humans are always selfish is a bit harder to answer.