It seems to me that “assuming some kind of tie breaking” is basically UDT1.1, is it not?
I agree that the generalization solves stag hunt. Do you think cooperative oracles are a good formalization of the generalization you intend? (Also, stag hunt involves differing utility functions, but they’re not really essential to the problem, unlike prisoner’s dilemma. I generally imagine a same-utility-function version for that reason, so what it’s pointing at is coordination problems which don’t involve asymmetry / tie-breaking.)
It seems to me that “assuming some kind of tie breaking” is basically UDT1.1, is it not?
Tie-breaking is just a technical assumption used in all decision theories (even VNM). It’s not the essence of UDT1.1 at all.
Imagine a variant of Wei’s coordination game where both agents prefer the (1,2) outcome to the (2,1) outcome. It’s not a tie, but UDT1 still can’t solve it, because there’s no obvious proof that the first agent returning 1 would imply any particular utility. You need to use the fact that the two agents receive different observations, and optimize the global strategy (input-output map). UDT1.1 supplies that, and so does my approach in the post.
(And then Stuart published ADT and Eliezer published FDT, which both mystifyingly dropped that idea and went back to UDT1...)
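For concreteness, here’s a minimal sketch of the global-strategy optimization I mean for that variant. The payoff numbers (3 for the outcome both prefer, 2 for the other asymmetric outcome, 0 for miscoordination) are just illustrative.

```python
from itertools import product

# Two copies of the same decision procedure, distinguished only by which
# observation they receive ("first" or "second"); shared utility function.
OBSERVATIONS = ["first", "second"]
ACTIONS = [1, 2]

def utility(policy):
    """Utility of a global strategy, i.e. a map from observation to action."""
    outcome = (policy["first"], policy["second"])
    return {(1, 2): 3, (2, 1): 2}.get(outcome, 0)

# UDT1.1-style step: enumerate whole input-output maps and pick the best one,
# instead of asking "what should the agent that saw X output?" in isolation.
policies = [dict(zip(OBSERVATIONS, acts))
            for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
best = max(policies, key=utility)
print(best, utility(best))  # {'first': 1, 'second': 2} 3
```

Enumerating maps like this only works for toy games, of course; the point is just which object gets optimized.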
Do you think cooperative oracles are a good formalization of the generalization you intend?
Unlike cooperative oracles, I’m not trying to solve bargaining. I think game theory will never be reduced to decision theory; that’s like going back from Einstein to Newton. The contribution of my post is more about finding the natural boundaries of decision theory within game theory, and arguing that PD shouldn’t be included.
I think game theory will never be reduced to decision theory; that’s like going back from Einstein to Newton. The contribution of my post is more about finding the natural boundaries of decision theory within game theory, and arguing that PD shouldn’t be included.
You said something in the post that I’m going to assume is closely related:
(Also this shows how Von Neumann-Morgenstern expected utility maximization is basically a restriction of UDT to single player games with perfect recall. For imperfect recall (AMD) or multiple players (Psy-Kosh) you need the full version.)
I think I have two points which may shift you on this:
If agents are using reflective oracles, which is to a certain extent a natural toy model of agents reasoning about each other (since it solves the grain of truth problem, allowing us to represent Bayesians who can reason about other agents in the same way they reason about everything, rather than in the game-theoretic way where there’s a special thing you do in order to reason about agents), then AIXI-like constructions will play Nash equilibria. IE, Nash equilibria are then just a consequence of maximizing expected utility.
There’s a sense in which correlated equilibria are what you get if you want game-theory to follow from individual rationality axioms rather than generalize them; this is argued in Correlated Equilibrium as an Expression of Bayesian Rationality by Aumann.
Yeah, that’s similar to how Benya explained reflective oracles to me years ago. It made me very excited about the approach back then. But at some point I realized that to achieve anything better than mutual defection in the PD, the oracle needs to have a “will of its own”, pulling the players toward Pareto optimal outcomes. So I started seeing it as another top-down solution to game theory, and my excitement faded.
Maybe not much point in trying to sway my position now, because there are already people who believe in cooperative oracles and more power to them. But this also reminds me of a conversation I had with Patrick several months before the Modal Combat paper came out. Everyone was pretty excited about it then, but I kept saying it would lead to a zoo of solutions, not some unique best solution showing the way forward. Years later, that’s how it played out.
We don’t have any viable attack on game theory to date, and I can’t even imagine what it could look like. In the post I tried to do the next best thing and draw a line: these problems are amenable to decision theory and these aren’t. Maybe if I get it just right, one day it will show me an opening.
Yeah, I also put a significant probability on the “there’s going to be a zoo of solutions” model of game theory. I suppose I’ve recently been more optimistic than usual about non-zoo solutions.
Imagine a variant of Wei’s coordination game where both agents prefer the (1,2) outcome to the (2,1) outcome. It’s not a tie, but UDT1 still can’t solve it, because there’s no obvious proof that the first agent returning 1 would imply any particular utility. You need to use the fact that the two agents receive different observations, and optimize the global strategy (input-output map). UDT1.1 supplies that, and so does my approach in the post.
Ok. I thought by tie-breaking you were implying equilibrium selection. I don’t understand how your approach in the post is doing anything more than tie-breaking, then. I still don’t understand any difference between what you’re indicating and Jessica’s post.
Unlike cooperative oracles, I’m not trying to solve bargaining.
What about agents with access to cooperative oracles, but which are conventionally rational (VNM rational) with respect to the probabilities provided by the cooperative oracle? This means they’re stuck in Nash equilibria (rather than doing anything nicer via bargaining), but the Nash equilibria are (something close to) Pareto-optimal within the set of Nash equilibria. This means in particular that you get UDT1.1 equilibrium selection; but you also get reasonable generalizations of it to agents with different utility functions.
Is that similar to what you meant in your final paragraph?
Slightly offtopic to your questions (which I’ll try to answer in the other branch), but I’m surprised we seem to disagree on some simple stuff...
In my mind UDT1.1 isn’t about equilibrium selection:
1) Here’s a game that requires equilibrium selection, but doesn’t require UDT1.1 (can be solved by UDT1). Alice and Bob are placed in separate rooms with two numbered buttons each. If both press button 1, both win 100 dollars. If both press button 2, both win 200 dollars. If they press different buttons, they get nothing.
2) Here’s a game that has only one equilibrium, but requires UDT1.1 (can’t be solved by UDT1). Alice and Bob are placed in separate rooms with two numbered buttons each. The experimenter tells each of them which button to press (maybe the same, maybe different). If they both obey, both win 500 dollars. If only one obeys, both win 100 dollars. Otherwise nothing. (Both games are sketched in code below.)
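The sketch assumes Alice and Bob run the same code, so they share one policy; in game 2 I enumerate the four possible instruction pairs explicitly rather than fixing a distribution over them.

```python
from itertools import product

# Game 1: both players see the same thing, so a policy is just one button.
# "If all my instances press b, utility is U(b)" already pins down the value,
# so UDT1 can compare the two self-consistent outcomes and pick 200.
def game1_utility(button):
    return {1: 100, 2: 200}[button]

for b in [1, 2]:
    print("game 1: everyone presses", b, "->", game1_utility(b))

# Game 2: each player observes an instruction, so a shared policy is a map
# from instruction (which button you were told) to the button you press.
def game2_utility(policy, instructions):
    obeyed = sum(policy[told] == told for told in instructions)
    return {2: 500, 1: 100, 0: 0}[obeyed]

instruction_pairs = list(product([1, 2], repeat=2))
for acts in product([1, 2], repeat=2):
    policy = dict(zip([1, 2], acts))
    payoffs = [game2_utility(policy, pair) for pair in instruction_pairs]
    print("game 2:", policy, "->", payoffs)
# Only the obedient map {1: 1, 2: 2} yields 500 under every instruction pair;
# evaluating whole maps like this is exactly the step UDT1.1 adds.
```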
Maybe we understand UDT1 and UDT1.1 differently? I’m pretty sure I’m following Wei’s intent, where UDT1.1 simply fixes the bug in UDT1’s handling of observations.
The title of the UDT1.1 post is “explicit optimization of global strategy”. The key paragraph:
The fix is straightforward in the case where every agent already has the same source code and preferences. UDT1.1, upon receiving input X, would put that input aside and first iterate through all possible input/output mappings that it could implement and determine the logical consequence of choosing each one upon the executions of the world programs that it cares about. After determining the optimal S* that best satisfies its preferences, it then outputs S*(X).
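In code, that procedure looks roughly like this (a minimal sketch; the function name and the way world-program consequences are abstracted into a `global_utility` callback are my stand-ins, not Wei’s):

```python
from itertools import product

def udt1_1(x, observations, actions, global_utility):
    """Sketch of UDT1.1: set the actual input aside, optimize over whole
    input/output mappings, then apply the optimal mapping to the input."""
    mappings = [dict(zip(observations, acts))
                for acts in product(actions, repeat=len(observations))]
    # global_utility stands in for "the logical consequences of choosing
    # this mapping upon the executions of the world programs".
    s_star = max(mappings, key=global_utility)
    return s_star[x]

# e.g. on the coordination-game variant from upthread:
coord = lambda m: {(1, 2): 3, (2, 1): 2}.get((m["first"], m["second"]), 0)
print(udt1_1("first", ["first", "second"], [1, 2], coord))  # prints 1
```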
Since optimal global strategies are also Nash equilibria in the framework of Jessica’s post, we can think of global policy selection as equilibrium selection (at least to the extent that we buy that framework). Your top-level post also seems to buy this connection (?).
I think your problem #1 only superficially sounds like it doesn’t require UDT1.1: it’s very plausible that UDT1 can solve it given a particular structure of correlations between Alice’s and Bob’s actions. But I suspect we actually need UDT1.1 to get good guarantees; UDT1 is solving it via assumptions about the correlation structure, not via some proposed mechanism that would systematically come to believe in such correlations.
I’m unclear on why you’re saying problem #2 requires UDT1.1. It is better to obey, unless you think obeying negatively correlates with your other copy obeying. Is that the source of difficulty you’re pointing at? We need UDT1.1 not to select an equilibrium, but to ensure that we’re in any equilibrium at all?
Ah, I see. You’re thinking of both theories in a math-intuition-based setting (“negatively correlates with your other copy” etc). I prefer to use a crisp proof-based setting, so we can discern what we know about the theories from what we hope they would do in a more fuzzy setting.
UDT1 receives an observation X and then looks for provable facts of the form “if all my instances receiving observation X choose to take a certain action, I’ll get a certain utility”.
UDT1.1 also receives an observation X, but handles it differently. It looks for provable facts of the form “if all my instances receiving various observations choose to use a certain mapping from observations to actions, I’ll get a certain utility”. Then it looks up the action corresponding to X in the mapping.
In problem 2, a UDT1 player who’s told to press button 1 will look for facts like “if everyone who’s told to press button 1 complies, then utility is 500”. But there’s no easy way to prove such a fact. The utility value can only be inferred from the actions of both players, who might receive different observations. That’s why UDT1.1 is needed: to fix UDT1’s bug with handling observations.
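To spell the gap out with a toy enumeration (same simplifying assumption that both players run the same code): fixing only what the instances told button 1 do leaves the utility undetermined, while fixing the whole mapping pins it down.

```python
from itertools import product

def utility(action_if_told_1, action_if_told_2, instructions):
    """Payoff in problem 2, given how a copy responds to each possible
    instruction and which buttons the two copies were actually told."""
    respond = {1: action_if_told_1, 2: action_if_told_2}
    obeyed = sum(respond[told] == told for told in instructions)
    return {2: 500, 1: 100, 0: 0}[obeyed]

instruction_pairs = list(product([1, 2], repeat=2))

# UDT1's antecedent fixes only the told-button-1 response. The told-button-2
# response is determined by the shared code, but not easily provable, so the
# utility UDT1 can derive from its antecedent alone ranges over:
udt1_values = {utility(1, a2, pair)
               for a2 in [1, 2] for pair in instruction_pairs}
print(sorted(udt1_values))   # [0, 100, 500]: no single provable value

# UDT1.1's antecedent fixes the whole mapping "press whatever you're told":
udt11_values = {utility(1, 2, pair) for pair in instruction_pairs}
print(sorted(udt11_values))  # [500]: one value, provable from the mapping alone
```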
The crisp setting makes it clear that UDT1.1 is about making more equilibria reachable, not about equilibrium selection. A game can have several equilibria, all of them reachable without UDT1.1, like my problem 1. Or it can have one equilibrium but require UDT1.1 to reach it, like my problem 2.
Of course, when we move to a math-intuition-based setting, the difference might become more fuzzy. Maybe UDT1 will solve some problems it couldn’t solve before, or maybe not. The only way to know is by formalizing math intuition.