This is essentially PD (in the aspects relevant to this post), but without the magical identification of Player1.Cooperate with Player2.Cooperate by virtue of their sharing the label “Cooperate”. Consider what happens to your thought experiment if the rule given by Omega is “You should give the same answer”, given that the players already have differentiating information (their assigned numbers). This distinction shouldn’t matter.
Recursion through the decision-making of all relevant agents seems conceptually indispensable. When Player1 sees itself again through the eyes of Player2, symmetry or asymmetry between the players (as opposed to the identity of the recursive copies of each player) becomes irrelevant.
Consider: if the players are different, then Player1 knows about Player2, which in turn knows about Player1 (recursion, a site of TDT-like acausal control); if the players are the same, then Player1 knows about an identical Player2 (recursion already at the first step). If we take the first road, not much is lost, and we gain more generality.
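To make that recursive structure concrete, here is a minimal Python sketch of bounded mutual prediction, in which each player decides by simulating the other player running the same procedure one level shallower. The action labels, payoff table, and depth cutoff are my own illustrative assumptions, not the setup of the original post.

```python
# A bounded sketch of mutual prediction between two players.
# The actions, payoffs, and depth cutoff are illustrative assumptions only.

PAYOFF = {  # toy coordination payoffs: each player gets 1 iff the actions match
    ("A", "A"): (1, 1), ("B", "B"): (1, 1),
    ("A", "B"): (0, 0), ("B", "A"): (0, 0),
}

def decide(me, other, depth=3):
    """Pick the action maximising my payoff, given a prediction of the other
    player obtained by running this same procedure one level shallower."""
    if depth == 0:
        return "A"  # arbitrary but deterministic base case for the regress
    predicted = decide(other, me, depth - 1)  # Player1 models Player2 modelling Player1...

    def my_payoff(action):
        joint = (action, predicted) if me == 1 else (predicted, action)
        return PAYOFF[joint][me - 1]

    return max(["A", "B"], key=my_payoff)

print(decide(1, 2), decide(2, 1))  # both players land on the same action
```

The depth cutoff with an arbitrary deterministic base case is just one crude way to stop the “I model you modelling me” regress from being infinite; nothing in the argument above depends on that particular choice.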
Ok, there are currently two suggested ways for agents to achieve logical correlation: through mutual prediction, or by using the same source code, and it sounds like you’re saying that the first method is more powerful. But so far I don’t really see how it can work at all. Can you explain how the mutual prediction approach would solve the problem given in my post, or any other problem that might show its advantage?
Mitchell_Porter explained in detail how he dealt with the problem. Perhaps consider his comment.
The two of you seem to be missing the point of this post. This sample problem isn’t hard or confusing in and of itself (the way Newcomb’s Problem is), but is merely meant to illustrate a limitation of the usefulness of logical correlation in decision theory. The issue here isn’t whether we can find some way to make the right decision (obviously we can, and I gave a method in the post itself) but whether it can be made through consideration of logical correlation alone.
More generally, some people don’t seem to get what might be called “decision-theoretic thinking”. When some decision problem is posted, they just start talking about how they would make the decision, instead of thinking about how to design an algorithm that would solve that problem along with every other decision problem it might face. Maybe I need to do a better job of explaining this?
Mitchell_Porter didn’t just solve the problem; he explained how he did it.
Did he do it “by consideration of logical correlation alone”? I do not know what that is intended to mean. Correlation normally has to be between two or more variables. In the post you talk about an agent taking account of “logical correlations between different instances of itself”. I don’t know what that means either.
More to the point, I don’t know why it is desirable. Surely one just wants to make the right decisions.
Expected utility maximisation solves this problem fine, provided the agent has a tendency to use a tie-breaking strategy similar to Mitchell_Porter’s. If an agent has no such tendency, and expects this kind of problem, then it will aspire to develop one.
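For concreteness, here is one way such a tie-breaking tendency could be written down, offered as a sketch rather than as a claim about what Mitchell_Porter actually did: maximise expected utility, and among equally good options always pick the alphabetically first one. The option names and the constant utility function are placeholders of my own.

```python
def choose(options, expected_utility):
    """Maximise expected utility; among ties, pick the alphabetically first
    option, so every agent using this rule lands on the same choice."""
    best = max(expected_utility(o) for o in options)
    tied = [o for o in options if expected_utility(o) == best]
    return sorted(tied)[0]  # the deterministic tie-break

# Two symmetric options with equal expected utility:
print(choose(["heads", "tails"], lambda o: 1.0))  # -> "heads" for every such agent
```

Any fixed, shared ordering would do just as well; the point is only that the rule is deterministic and common to every agent that adopts it.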
This sounds like your decision theory is “Decide to use the best decision theory.”
I guess there’s an analogy to people whose solution to the hard problems that humanity faces is “Build a superintelligent AI that will solve those hard problems.”
Not really: provided you make decisions deterministically, you should be OK in this example. Agents inclined towards randomization might have problems with it, but I am not advocating that.
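As a toy illustration of that last point (my own construction, assuming a setup in which matching answers is what gets rewarded), two agents who decide deterministically by the same rule always agree, while agents who break the tie with a coin flip agree only about half the time:

```python
import random

options = ["A", "B"]  # two equally good options; purely hypothetical

def deterministic_choice():
    # Same rule, same inputs, same answer every time.
    return sorted(options)[0]

def randomized_choice():
    # A coin flip can land differently for each agent.
    return random.choice(options)

trials = 10_000
matches = sum(randomized_choice() == randomized_choice() for _ in range(trials))
print(deterministic_choice() == deterministic_choice())  # always True
print(matches / trials)                                  # roughly 0.5
```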