The three interpretations I mean are:
(1) People’s behavior is accurately predicted by modeling them as status-maximizing agents.
(2) People’s subjective experience of well-being is accurately predicted by modeling it as proportional to status.
(3) A person is well-off, in the sense that an altruist should care about, in proportion to their status.
Is that clearer?
Yes, thank you. As far as I can tell, (1) and (2) are closest to the meaning I inferred. I understand that we can consider them separately, but IMO (2) implies (1).
If an agent seeks to maximize its sense of well-being (as it would be reasonable to assume humans do), then we would expect the agent to take actions that it believes will achieve this effect. Its beliefs could be wrong, of course, but since the agent is descended from a long line of evolutionarily successful agents, we can expect it to be right a lot more often than it's wrong.
Thus, if the agent's sense of well-being is accurately predicted as proportional to its status (whether or not the agent itself is aware of this), then it is reasonable to assume that the agent will take actions that, on average, raise its status.
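To make the implication explicit, here is a minimal sketch; the well-being function $W$, the status function $S$, and the constant $k$ are my own notation, not anything stated above. If (2) holds, then $W(a) = k \cdot S(a)$ for some $k > 0$, where $W(a)$ is the well-being the agent expects from action $a$ and $S(a)$ is the status it expects. An agent that chooses actions to maximize expected well-being then satisfies

$$\arg\max_a \mathbb{E}[W(a)] = \arg\max_a \mathbb{E}[k \, S(a)] = \arg\max_a \mathbb{E}[S(a)],$$

since a positive constant factor does not change the argmax. So the agent's observable choices coincide with those of a status maximizer, which is exactly (1).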