The problem with this kind of analysis is that it uses the intuition of a physical scenario to exploit an ambiguity in what we mean by "agent" and "decision."
Ultimately, the notions of decisions and agents are idealizations. Any actual person or AI only acts as the laws of physics dictate, and agents, decisions, or choices don't appear in any description in terms of fundamental physics. Since people (and programs) are complex systems that often make relatively sophisticated choices about their actions, we introduce the idealization of agents and decisions.
That idealization is basically what one sees in the standard formulation of game theory in terms of trees, visibility conditions, and payoffs, with decisions simply being nodes on the tree and agents being a certain kind of function from visible outcomes and nodes to children of those nodes. The math is all perfectly clear and there is nothing paradoxical or troubling.
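A minimal sketch of that idealization, under the assumptions above (all names here are illustrative, not from any standard library): a game is a tree of decision nodes and payoff leaves, and an "agent" is literally just a function from the visible information at a node to one of that node's children.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

# A node in an extensive-form game tree: either a decision point for some
# player, or a leaf carrying payoffs. (Hypothetical structure for illustration.)
@dataclass
class Node:
    player: Optional[str] = None                                 # None for leaves
    info: str = ""                                               # what the acting player can see
    children: Dict[str, "Node"] = field(default_factory=dict)    # action label -> subtree
    payoffs: Dict[str, float] = field(default_factory=dict)      # only meaningful at leaves

# An "agent" in this formalization is nothing more than a function from
# (visible info, current node) to the label of one of that node's children.
Agent = Callable[[str, Node], str]

def play(node: Node, agents: Dict[str, Agent]) -> Dict[str, float]:
    """Walk the tree by repeatedly applying each player's agent function."""
    while node.player is not None:
        action = agents[node.player](node.info, node)
        node = node.children[action]
    return node.payoffs

# Toy one-move game: player A picks "cooperate" or "defect".
leaf_c = Node(payoffs={"A": 3.0})
leaf_d = Node(payoffs={"A": 5.0})
root = Node(player="A", info="start", children={"cooperate": leaf_c, "defect": leaf_d})

# A simple agent that looks one step ahead (sufficient for this depth-1 tree).
greedy: Agent = lambda info, node: max(
    node.children, key=lambda a: node.children[a].payoffs.get("A", 0.0)
)
print(play(root, {"A": greedy}))   # {'A': 5.0}
```

Nothing in this picture is paradoxical: once the tree and the agent functions are fixed, the outcome is just a matter of evaluating them.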
What makes it seem like there is a problem is when we redescribe the situation in terms of guarantees that the other player will have predicted your choice in a certain way, or the like. Formally, that doesn't really make sense...or at least it corresponds to a radically different game, e.g., restricting the tree so that only those outcomes are allowed. However, because we have this other non-formal notion of choice and agent stuck in our heads (choice is something like picking what socks to wear; agent is something like a person), we don't realize that our idealization just changed drastically, even though in common language we are still playing the same game.
In other words, there are no extra facts to be found about which decision theory is best. There are facts about what actual physical systems will do and there are mathematical facts about trees and functions on them, but there isn't any room for further facts about what kind of decision theory is the true one.
Was there also no room for the fact that VNM utility maximization is useful? I’m looking for the next step in usefulness after VNM and UDT. Or you could say I’m looking for some nice math to build into an AI. My criteria for judging that math are messy, but that doesn’t mean they don’t exist.