What I am talking about is the dimension from cooperation to conflict, i.e. jointly optimising the preferences of all interacting agents versus optimising one set of preferences at the expense of the other involved agents’ preferences.
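To put the two poles slightly more formally (this is just my own sketch notation, not anything canonical): suppose each agent $i$ has a utility function $u_i$ over the joint action $a$. Then roughly

\[
a_{\text{coop}} = \arg\max_a \sum_i u_i(a)
\qquad \text{vs.} \qquad
a_{\text{conflict}} = \arg\max_a u_1(a), \ \text{even where this lowers } u_j(a) \text{ for } j \neq 1 .
\]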
This is a dimension that any sufficiently intelligent agent trained on situations involving multi-agent interaction will learn, even if it is only trained to maximize its own set of preferences. It’s a concept that is independent of the agents’ preferences in any single interaction, so the definition of “best” is really not relevant at this level of abstraction.
It’s probably one of the most basic and most general dimensions of action in multi-agent settings, and one of only very few that is always salient.
That’s why I say that the two poles of that dimension in agent behavior are wells. (They are both very wide wells, I would think. When I said “deep” I meant something more like “hard to dislodge from”.)