I realized today that UDT doesn’t really need the assumption that other players use UDT.
Was there ever such an assumption? I recall a formulation in which the possible “worlds” include everything that feeds into the decision algorithm, and it doesn’t matter whether there are any games or other players inside those worlds (their treatment is the same, as are the corresponding reasons for using UDT).
Yeah, it’s a bit subtle and I’m not sure it even makes sense. But the idea goes something like this.
Most formulations of UDT are self-referential: “determine the logical consequences of this algorithm behaving so-and-so”. That automatically takes into account all other instances of this algorithm that happen to exist in the world, as you describe. But in this post I’m trying to handwave a non-self-referential version: “If you’re playing a game where everyone has the same utility function, follow the simplest Nash equilibrium maximizing everyone’s expected utility, no matter how your beliefs change during the game”. That can be seen as an individually rational decision! The other players don’t have to be isomorphic to you, as long as they are rational enough and have no incentive to cheat you.
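To make that concrete, here’s a minimal sketch (not from the post itself, with a made-up payoff matrix): in a common-payoff game, the joint action profile that maximizes the shared utility is automatically a Nash equilibrium, since no unilateral deviation can improve on a payoff that is already the global maximum. So each player can commit to their part of that profile without proving anything about the other players’ internals.

```python
# Minimal sketch, assuming a made-up 2x3 common-payoff game:
# the profile maximizing the shared utility is a Nash equilibrium,
# so each player can just commit to their part of it.
import itertools

# shared_payoff[a1][a2] = utility that BOTH players receive
shared_payoff = [
    [3, 0, 0],
    [0, 2, 0],
]

def best_joint_profile(payoff):
    """Return the action profile maximizing the common payoff."""
    rows, cols = len(payoff), len(payoff[0])
    return max(itertools.product(range(rows), range(cols)),
               key=lambda p: payoff[p[0]][p[1]])

def is_nash(payoff, profile):
    """Check that no player gains by deviating unilaterally."""
    a1, a2 = profile
    u = payoff[a1][a2]
    row_ok = all(payoff[d][a2] <= u for d in range(len(payoff)))
    col_ok = all(payoff[a1][d] <= u for d in range(len(payoff[0])))
    return row_ok and col_ok

profile = best_joint_profile(shared_payoff)
print(profile, is_nash(shared_payoff, profile))  # (0, 0) True
```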
That goes against something I’ve been telling people for years—that UDT cannot be used in real life, because the self-referential version requires proving detailed theorems about other people’s minds. The idea in this post can be used in real life. The fact that it can’t handle the Prisoner’s Dilemma (PD) is a nice sanity check: cooperating in PD requires proving detailed theorems to prevent cheating, while the problems I’m solving offer no incentive to cheat in the first place.