Regarding 4: given that infra-Bayesianism is maximally paranoid, shouldn’t it have lower performance relative to decision-making theories like regular Bayes under many non-adversarial conditions? If the training set does not contain many instances of adversarial information, then shouldn’t we expect agents to adopt Bayes instead of infra-Bayes?
I think Vanessa would argue that “Bayesianism” is not really an option. The non-realizability problem in Bayesianism is not some weird special case but the normal state of things: Bayesianism assumes we have hypotheses that fully describe the world, which we very definitely don’t have in real life. IB tries to be less demanding, so the laws in the agent’s hypothesis class don’t need to be nearly that detailed. I’m relatively skeptical of this: I suspect that for an IB agent to work well, the laws in its hypothesis class would also need to be infeasibly detailed. So fully “adopting Bayes” and fully “adopting infra-Bayes” are both impossible. We probably won’t have such a clean mathematical model of the messy decision process a superintelligence actually uses; the question is whether thinking of it as an approximation of Bayes or of infra-Bayes gives us a clearer picture. That’s a hard question. IB has the advantage that its laws need to be less detailed, and the disadvantage that, as you rightly point out, it is unnecessarily paranoid. My personal guess is that nothing beyond the basic insight of Bayesianism (“the agent seems to update on evidence, roughly following Bayes’ rule”) will actually be useful for understanding how an AI thinks.
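To make the “paranoia” point concrete, here is a toy sketch of my own (a crude caricature, not Vanessa’s actual formalism: real IB uses maxmin over infradistributions, not a min over two discrete hypotheses; all names and numbers below are hypothetical). A Bayesian agent averages over its posterior and takes a bet with positive expected value; a maxmin agent evaluates the same bet under the worst hypothesis in its set and passes, forgoing value in a perfectly non-adversarial world.

```python
import numpy as np

# Two hypothetical hypotheses about a coin's bias, with a uniform prior.
priors = np.array([0.5, 0.5])   # prior over hypotheses H1, H2
bias = np.array([0.7, 0.3])     # P(heads | H_i)

def bayes_update(priors, heads):
    """Standard Bayes rule: posterior is proportional to prior * likelihood."""
    likelihood = bias if heads else 1 - bias
    posterior = priors * likelihood
    return posterior / posterior.sum()

def expected_payoff(p_heads):
    """Betting on heads pays +1 on heads, -1 on tails; passing pays 0."""
    return 2 * p_heads - 1

# Bayesian agent: observe one heads, average over the posterior, maximize.
posterior = bayes_update(priors, heads=True)   # -> [0.7, 0.3]
p_heads_bayes = posterior @ bias               # -> 0.58
bayes_choice = "bet" if expected_payoff(p_heads_bayes) > 0 else "pass"

# Maxmin ("paranoid") agent: score each action under the WORST hypothesis
# still in its set, then pick the action with the best worst case.
worst_payoff_bet = min(expected_payoff(p) for p in bias)  # -> -0.4
infra_choice = "bet" if worst_payoff_bet > 0 else "pass"

print(p_heads_bayes, bayes_choice)     # 0.58 bet
print(worst_payoff_bet, infra_choice)  # -0.4 pass: paranoia forgoes value
```

This is exactly the failure mode your question points at: if the environment really is drawn from something like the Bayesian’s prior, the worst-case agent systematically leaves expected value on the table.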