The two of you seem to be missing the point of this post. This sample problem isn't hard or confusing in itself, the way Newcomb's Problem is; it is merely meant to illustrate a limitation of the usefulness of logical correlation in decision theory. The issue here isn't whether we can find some way to make the right decision (obviously we can, and I gave a method in the post itself), but whether that decision can be reached through consideration of logical correlation alone.
More generally, some people don’t seem to get what might be called “decision theoretic thinking”. When some decision problem is posted, they just start talking about how they would make the decision, instead of thinking about how to design an algorithm that would solve that problem and every other decision problem that it might face. Maybe I need to do a better job of explaining this?
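To make that distinction concrete, here is a minimal Python sketch (my own illustration, not anything from the post; `DecisionProblem`, `Policy`, and `evaluate_policy` are hypothetical names): the thing being designed and judged is a general rule for choosing, scored across all the problems an agent might face, rather than an answer to one particular problem.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative types only; the original post does not define any of these names.

@dataclass
class DecisionProblem:
    name: str
    options: List[str]
    payoff: Callable[[str], float]  # payoff of choosing a given option

# A "policy" is a rule that maps any decision problem to a choice.
Policy = Callable[[DecisionProblem], str]

def evaluate_policy(policy: Policy, problems: List[DecisionProblem]) -> float:
    """Score the whole policy across every problem it might face,
    which is what "decision theoretic thinking" evaluates, rather than
    arguing about how to answer any single problem in isolation."""
    return sum(p.payoff(policy(p)) for p in problems)

# Example: a trivial policy that always picks the first listed option.
first_option: Policy = lambda p: p.options[0]
problems = [DecisionProblem("toy", ["A", "B"], lambda o: 1.0 if o == "A" else 0.0)]
print(evaluate_policy(first_option, problems))  # -> 1.0
```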
Mitchell_Porter didn't just solve the problem; he explained how he did it.
Did he do it “by consideration of logical correlation alone”? I do not know what that is intended to mean. Correlation normally has to be between two or more variables. In the post you talk about an agent taking account of “logical correlations between different instances of itself”. I don’t know what that means either.
More to the point, I don’t know why it is desirable. Surely one just wants to make the right decisions.
Expected utility maximisation solves this problem fine, provided the agent has a tendency to use a tie-breaking strategy similar to Mitchell_Porter's. If an agent has no such tendency, and expects this kind of problem, then it will aspire to develop a similar tendency.
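As a rough sketch of what "expected utility maximisation plus a deterministic tie-breaking tendency" might look like (my own illustration, assuming the problem reduces to options that tie on expected utility; this is not necessarily Mitchell_Porter's actual rule):

```python
from typing import Callable, List

def choose(options: List[str], expected_utility: Callable[[str], float]) -> str:
    """Expected-utility maximisation with a deterministic tie-break.

    When several options tie on expected utility, fall back to a fixed
    ordering (alphabetical here) instead of randomising, so that every
    instance of an agent running this same rule lands on the same choice.
    """
    best = max(expected_utility(o) for o in options)
    tied = [o for o in options if expected_utility(o) == best]
    return min(tied)  # deterministic tie-break: lexicographically first option

# Example: two options with equal expected utility.
print(choose(["B", "A"], lambda o: 1.0))  # -> "A", on every run
```

The particular tie-break does not matter; the point is only that it is fixed in advance, which is exactly what a randomising agent lacks.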
If an agent has no such tendency, and expects this kind of problem, then it will aspire to develop a similar tendency.
This sounds like your decision theory is “Decide to use the best decision theory.”
I guess there’s an analogy to people whose solution to the hard problems that humanity faces is “Build a superintelligent AI that will solve those hard problems.”
Not really: provided you make decisions deterministically, you should be OK in this example. Agents inclined towards randomization might have problems with it, but I am not advocating randomization.
Mitchell_Porter explained in detail how he dealt with the problem. Perhaps consider his comment.