Mitchell_Porter didn’t just solve the problem; he explained how he did it.
Did he do it “by consideration of logical correlation alone”? I do not know what that is intended to mean. Correlation normally has to be between two or more variables. In the post you talk about an agent taking account of “logical correlations between different instances of itself”. I don’t know what that means either.
More to the point, I don’t know why it is desirable. Surely one just wants to make the right decisions.
Expected utility maximisation solves this problem fine, provided the agent has a tendency to use a tie-breaking strategy similar to Mitchell_Porter's. If an agent has no such tendency, and expects this kind of problem, then it will aspire to develop a similar tendency.
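Here is a minimal sketch of what that might look like, with hypothetical option names and payoffs chosen purely for illustration: the agent maximises expected utility and, when several options tie, breaks the tie with a fixed deterministic rule (here, alphabetical order) rather than by flipping a coin.

```python
# Toy expected-utility maximiser with a deterministic tie-break.
# Option names and payoffs are hypothetical, for illustration only.

def expected_utility(option, outcomes):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in outcomes[option])

def choose(outcomes):
    """Pick the option with the highest expected utility.

    Ties are broken deterministically (alphabetical order), so any
    two agents running this same procedure on the same problem will
    select the same option.
    """
    scores = {opt: expected_utility(opt, outcomes) for opt in outcomes}
    best = max(scores.values())
    tied = sorted(opt for opt, score in scores.items() if score == best)
    return tied[0]  # fixed tie-breaking rule: first in sorted order

# Two options with identical expected utility (a symmetric problem).
outcomes = {
    "left":  [(0.5, 10), (0.5, 0)],
    "right": [(0.5, 10), (0.5, 0)],
}
print(choose(outcomes))  # always "left", never a coin flip
```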
This sounds like your decision theory is “Decide to use the best decision theory.”
I guess there’s an analogy to people whose solution to the hard problems that humanity faces is “Build a superintelligent AI that will solve those hard problems.”
Not really: provided you make decisions deterministically, you should be OK in this example. Agents inclined towards randomization might have problems with it, but I am not advocating that.
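To illustrate why determinism helps, here is a rough sketch that assumes, purely for illustration, that the scenario rewards two instances of the same agent choosing the same one of two equally good options. Deterministic copies always coordinate; copies that break the tie randomly coordinate only about half the time.

```python
import random

def deterministic_choice(options):
    """Same inputs, same output: copies of this agent always agree."""
    return sorted(options)[0]

def randomizing_choice(options):
    """Break the tie with a coin flip: copies may disagree."""
    return random.choice(options)

options = ["left", "right"]  # hypothetical, equally good options

# Two copies of a deterministic agent always match.
assert deterministic_choice(options) == deterministic_choice(options)

# Two copies of a randomizing agent match only about half the time.
matches = sum(
    randomizing_choice(options) == randomizing_choice(options)
    for _ in range(10_000)
)
print(f"randomizing copies matched {matches / 10_000:.0%} of the time")
```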