My guess is: if the randomness is pseudorandom, then the utility is 1 for the behavior it chose and 0 for everything else; if the randomness is true randomness and we use Boltzmann rationality, then all behaviors have equal utility; and if the randomness is true and the agent is actually maximizing, then the abstraction breaks down?
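A minimal sketch of the Boltzmann case, assuming the standard Boltzmann-rational model P(b) ∝ exp(β·U(b)) over behaviours (the names `beta` and `infer_utilities` are illustrative, not anything defined in the post): inverting the model recovers utilities only up to an additive constant, and a uniform choice distribution yields equal utility for every behaviour.

```python
import numpy as np

# Hedged sketch: under Boltzmann rationality, P(b) = exp(beta * U(b)) / Z,
# so U(b) = log P(b) / beta + const. Names here are illustrative assumptions.

def infer_utilities(behaviour_probs, beta=1.0):
    """Recover U(b) up to an additive constant from observed choice frequencies."""
    return np.log(behaviour_probs) / beta

# A truly random agent picks each of four behaviours uniformly:
print(infer_utilities(np.full(4, 0.25)))  # all entries equal -> equal utility

# A (nearly) deterministic pseudorandom agent concentrates probability on one
# behaviour; the recovered utility gap grows without bound as that probability
# approaches 1, matching "1 for the behavior it chose and 0 for everything
# else" in spirit after normalisation.
print(infer_utilities(np.array([0.97, 0.01, 0.01, 0.01])))
```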
I want to clarify that this is not a particularly useful type of utility function, and the post was a mostly-failed attempt to make it useful.
Fair! Here’s another[1] issue, I think, now that I’ve realized you were talking about utility functions over behaviours, at least if you allow ‘true’ randomness.
Consider a slight variant of matching pennies: if an agent doesn’t make a choice, their choice is made randomly for them.
Now consider the following agents:
- Twitchbot.
- An agent that always plays (truly) randomly.
- An agent that always plays the best Nash equilibrium, tiebroken by the choice that results in them making the most decisions. (And then tiebroken arbitrarily from there, not that it matters in this case.)
These all end up with infinite random sequences of plays, ~50% heads and ~50% tails[2][3][4]. And any infinite random (50%) sequence of plays could be a plausible sequence of plays for any of these agents (see the simulation sketch after the footnotes). And yet these agents ‘should’ have different decompositions into w and g.
Maybe. Or maybe I was misconstruing what you meant by ‘if the randomness is true and the agent is actually maximizing, then the abstraction breaks down’ and this is the same issue you recognized.
[2] Twitchbot doesn’t decide, so its decision is made randomly for it, so it’s 50/50.
[3] The random agent decides randomly, so it’s 50/50.
[4] ‘The’ best Nash equilibrium is any combination of choosing 50/50 randomly, and/or not playing. The tiebreak means the best combination is playing 50/50.
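For concreteness, here is a minimal simulation sketch of the variant game (the agent names and the `resolve` helper are my own illustrative assumptions, not from the post): all three agents produce play sequences that are statistically indistinguishable, ~50% heads and ~50% tails, even though they ‘should’ decompose differently.

```python
import random

# Hedged sketch of the matching-pennies variant: an agent that makes no
# choice has its choice made randomly for it.

def resolve(choice):
    """If the agent declined to choose, flip a coin for it."""
    return choice if choice is not None else random.choice("HT")

def twitchbot():
    return None  # never decides; the environment decides for it

def random_agent():
    return random.choice("HT")  # decides, but truly at random

def nash_agent():
    # The best Nash equilibrium play is the 50/50 mix; the tiebreak in the
    # comment above means the agent makes that randomized choice itself.
    return random.choice("HT")

for agent in (twitchbot, random_agent, nash_agent):
    plays = [resolve(agent()) for _ in range(100_000)]
    print(agent.__name__, plays.count("H") / len(plays))  # ~0.5 for all three
```

From the outside, the three loops print the same thing: any long transcript of plays is equally plausible for all three agents, which is exactly why the decomposition into w and g can’t be read off behaviour alone.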