Yes, thank you. As far as I can tell, (1) and (2) are closest to the meaning I inferred. I understand that we can consider them separately, but IMO (2) implies (1).
If an agent seeks to maximize its sense of well-being (as it would be reasonable to assume humans do), then we would expect the agent to take actions which it believes will achieve this effect. Its beliefs could be wrong, of course, but since the agent is descended from a long line of evolutionarily successful agents, we can expect it to be right a lot more often than it's wrong.
Thus, if the agent's sense of well-being can be accurately predicted as being proportional to its status (whether or not the agent itself is aware of this), then it would be reasonable to assume that the agent will take actions that, on average, lead to raising its status.
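To spell out the step I'm relying on (a sketch, assuming well-being decomposes as status scaled by some constant $k > 0$ plus a remainder $\varepsilon$ that doesn't depend on the chosen action $a$):

\[
\operatorname*{arg\,max}_{a}\; \mathbb{E}\big[\text{well-being} \mid a\big]
\;=\; \operatorname*{arg\,max}_{a}\; \mathbb{E}\big[k \cdot \text{status} + \varepsilon \mid a\big]
\;=\; \operatorname*{arg\,max}_{a}\; \mathbb{E}\big[\text{status} \mid a\big].
\]

Under that assumption, an agent choosing actions to maximize its expected well-being is, in effect, choosing actions to maximize its expected status, which is the sense in which (2) implies (1).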
Consider this explanation, too.