The observation is mathematically trivial, but it motivates characterising an optimiser as something with the type signature (X→R)→P(X).
You might instead be motivated to characterise optimisers by...
A utility function u:X→R
A quantifier (X→R)→P(R)
A preorder (R,≤) over the outcomes
Etc.
However, were you to characterise optimisers in any of the ways above, then the Nash equilibrium between optimisers would not itself be an optimiser, and we would lose compositionality. Compositionality is conceptually helpful because it means that your n≥2 definitions/theorems reduce to the n=1 case.
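To see the compositionality claim concretely, here is a minimal sketch over finite domains. It assumes, for simplicity, a shared payoff u : X×Y→R for both players (the general case gives each player its own utility); `argmax_over` and `nash` are illustrative names, not from the original.

```python
from itertools import product

# A selection-function optimiser has type (X -> R) -> P(X).
# argmax over a finite domain is the canonical example.
def argmax_over(domain):
    def optimiser(u):
        best = max(u(x) for x in domain)
        return {x for x in domain if u(x) == best}
    return optimiser

# The Nash equilibrium of two such optimisers is again something of the
# same shape, ((X x Y) -> R) -> P(X x Y): a joint move (x, y) is an
# equilibrium when each player's move is selected by their own optimiser
# with the other player's move held fixed.
def nash(opt_x, domain_x, opt_y, domain_y):
    def optimiser(u):
        return {
            (x, y)
            for x, y in product(domain_x, domain_y)
            if x in opt_x(lambda x2: u(x2, y))
            and y in opt_y(lambda y2: u(x, y2))
        }
    return optimiser

X = [0, 1, 2]
Y = [0, 1, 2]
eq = nash(argmax_over(X), X, argmax_over(Y), Y)
# A coordination payoff: agreeing on a larger point pays more.
print(eq(lambda x, y: x * y if x == y else 0))
# -> {(0, 0), (1, 1), (2, 2)}: the diagonal profiles, each a mutual best response
```

The point is that `nash(...)` has the same type signature as its arguments, so definitions and theorems stated for a single optimiser apply unchanged to the equilibrium.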
If we characterise an agent with a quantifier (X→R)→P(R), then we’re saying which payoffs the agent might achieve given each task. Namely, r∈q(u) if and only if it’s possible that the agent achieves payoff r∈R when faced with a task u:X→R.
But this definition doesn’t play well with Nash equilibria.
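A small sketch of why the quantifier characterisation loses the information a Nash equilibrium needs, assuming a two-element domain (`argmax`, `q`, `u1`, `u2` are illustrative names, not from the original):

```python
X = [0, 1]

# A selection-function optimiser, (X -> R) -> P(X).
def argmax(u):
    best = max(u(x) for x in X)
    return {x for x in X if u(x) == best}

# The quantifier it induces, (X -> R) -> P(R):
# the payoffs realised by the moves the optimiser might pick.
def q(u):
    return {u(x) for x in argmax(u)}

# Two tasks with the same attainable payoffs but different optimal moves:
u1 = lambda x: 1 if x == 0 else 0
u2 = lambda x: 1 if x == 1 else 0

print(q(u1) == q(u2))          # True: the quantifier cannot tell them apart,
print(argmax(u1), argmax(u2))  # {0} vs {1}: but the best responses differ.
```

Since the equilibrium condition is "each player's move is optimal against the others' moves", it is stated in terms of *which moves* are chosen, and the quantifier only reports *which payoffs* are attainable; two situations with identical quantifiers can have different equilibria.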
Here you mean (X→R)→P(X), right?
Wait, I mean a quantifier in (X→R)→P(R).