I assume you mean “provide definitions”:
Agent—https://www.lesswrong.com/tag/agent
Care—https://www.lesswrong.com/tag/preference
Future states—the numeric value of the agent’s utility function in the future
Does it make sense?
More or less / close enough 🙂
On the linked Agent page they write: “A rational agent is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its utility.”
I would not share that definition, and I don’t think most other people commenting on this post would either (I know there is some irony to that, given that it’s the definition given on the LessWrong wiki).
Often the words/concepts we use don’t have clear boundaries (more about that here). I think “agent” is such a word/concept.
Humans are an example of “agents” (by my conception of the term) that don’t quite have utility functions.
How we define “agent” may be less important if what we are really interested in is the behavior/properties of “software programs with extreme and broad mental capabilities”.
I don’t think all extremely capable minds/machines/programs would need an explicit utility function, or even an implicit one.
To be clear, there are many cases where I think it would be “stupid” not to act as if you have (an explicit or implicit) utility function (in some sense). But I don’t think it’s required of all extremely mentally capable systems (even if that would require these systems to have logically contradictory “beliefs”).
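For concreteness, here is a minimal sketch of the loop the quoted definition describes (the action set, the belief distribution, and the utility function are all made-up placeholders, not a claim about how any real system works):

```python
# A toy sketch of the quoted recipe: form beliefs, evaluate consequences,
# pick the action that maximizes (expected) utility. All names here are
# illustrative placeholders.
from typing import Callable, Dict, Hashable, List

Action = Hashable
Outcome = Hashable

def choose_action(
    actions: List[Action],
    beliefs: Callable[[Action], Dict[Outcome, float]],  # P(outcome | action)
    utility: Callable[[Outcome], float],                 # numeric value of an outcome
) -> Action:
    """Return the action with the highest expected utility under `beliefs`."""
    def expected_utility(action: Action) -> float:
        return sum(p * utility(o) for o, p in beliefs(action).items())
    return max(actions, key=expected_utility)

# Toy usage with two actions and two outcomes.
actions = ["stay", "move"]
beliefs = lambda a: {"good": 0.8, "bad": 0.2} if a == "move" else {"good": 0.5, "bad": 0.5}
utility = {"good": 1.0, "bad": -1.0}.get
print(choose_action(actions, beliefs, utility))  # -> "move"
```

My point above is that an extremely capable system need not literally run, or even be well-approximated by, something like this loop; it might act on heuristics, rules, or preferences that are not fully consistent.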