It doesn’t really make sense to talk about the agent idealization at the same time as talking about effective precommitment (i.e. deterministic/probabilistic determination of actions).
The notion of an agent is an idealization of actual actors in terms of free choices, e.g., idealizing individuals in terms of choices of functions on game-theoretic trees. That idealization is incompatible with simultaneously regarding those actors as deterministically or probabilistically committed to actions for those same ‘choices.’
Of course, ultimately, actual actors (e.g. people) are only approximated by talk of agents. But if you try to use the agent idealization while simultaneously regarding those *same* choices as effectively precommitted, you risk contradiction and model absurdity. (You can, of course, shrink the set of actions you regard as free choices in the agent idealization, but that doesn’t seem to be how you are talking about things here.)
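To make the alleged tension concrete, here is a minimal Python sketch (all names are hypothetical, not from the original comment). The agent idealization treats the actor as a freely choosable function from decision nodes to actions, while effective precommitment treats the action at that same node as already fixed by the world model; applying both to the same node lets the two models disagree:

```python
# One decision node with two available actions (a Newcomb-style choice).
GAME_TREE = {"node": ["one_box", "two_box"]}

def agent_policy(node):
    # Under the agent idealization, this mapping is a free parameter:
    # the modeller may consider any function from nodes to actions.
    return "two_box"

# Under effective precommitment, the action at the same node is already
# determined (e.g. by a predictor's deterministic model of the actor).
PRECOMMITTED = {"node": "one_box"}

def models_agree(node):
    # Using both idealizations for the *same* choice invites contradiction:
    # the free-choice model can select an action the deterministic model rules out.
    return agent_policy(node) == PRECOMMITTED[node]

print(models_agree("node"))
```

Here the free-choice model selects "two_box" while the precommitment model fixes "one_box", so `models_agree` reports the inconsistency the comment warns about; restricting which nodes count as free choices is the escape hatch mentioned above.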
What do you mean by agent idealization? That seems key to understanding your comment, which I can’t follow at the moment.
EDIT: Actually, I just saw your comment above. I think TDT/UDT show how we can extend the agent idealization to cover these kinds of situations so that we can talk about both at the same time.
To the extent they define a particular idealization, it’s one that isn’t interesting or compelling. For there to be a well-defined question here, one would want a single definition of a rational agent that everyone agreed on, which could then be shown to favor such-and-such decision theory.
To put the point differently: you and I can agree on absolutely every fact about the world and mathematics and yet disagree about which decision theory is best, because we simply mean slightly different things by ‘rational agent.’ Moreover, there is no clear practical difference pressing us to adopt one definition over another, unlike the practical usefulness of those aspects of the definition of a rational agent that yield the outcomes all the theories agree on.