If an agent has no such tendency, and expects this kind of problem, then it will aspire to develop a similar tendency.
This sounds like your decision theory is “Decide to use the best decision theory.”
I guess there’s an analogy to people whose solution to the hard problems that humanity faces is “Build a superintelligent AI that will solve those hard problems.”
Not really—provided you make decisions deterministically, you should be OK in this example. Agents inclined towards randomization might have problems with it, but I am not advocating randomization.