I am not sure I completely follow, but I think the point is that you will in fact revise the probability upward if a new argument is more convincing than you expected. Since the AI can estimate what you expect from it better than you can estimate how convincing it will make the argument, it will be able to make every argument more convincing than you expect.
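A minimal toy sketch of that mechanic (the convincingness scale, the Gaussian likelihoods, and the specific numbers are all my own assumptions for illustration, not part of the original setup): if the observed convincingness of an argument exceeds what your model led you to expect, Bayes' rule pushes the probability up; if it falls short, the probability goes down.

```python
from math import exp, sqrt, pi

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# Assumed toy model: you expect arguments for a true claim to be somewhat
# more convincing on average (mu_true) than arguments for a false one (mu_false).
prior = 0.5
mu_true, mu_false, sigma = 0.7, 0.5, 0.1

def posterior(observed_convincingness):
    like_true = normal_pdf(observed_convincingness, mu_true, sigma)
    like_false = normal_pdf(observed_convincingness, mu_false, sigma)
    numerator = like_true * prior
    return numerator / (numerator + like_false * (1 - prior))

print(posterior(0.6))  # exactly what you expected on average -> stays at 0.5
print(posterior(0.8))  # more convincing than expected -> updates up (~0.98)
print(posterior(0.4))  # less convincing than expected -> updates down (~0.02)
```

The worry in your scenario is then that if the AI can reliably land above your expectation on every question, the update is always upward regardless of the truth.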
I think you are adding further specifications to the original setting. Your original description assumed that the AI is a very clever arguer who constructs very persuasive deceptive arguments. Now you assume that the AI actively tries to make the arguments more persuasive than you expect. You can stipulate, for argument's sake, that the AI can always make a more convincing argument than you expect, but 1) it is not clear whether this is even possible in realistic circumstances, and 2) it obscures the (interesting and novel) original problem ("is evidence of evidence as valuable as the evidence itself?") with a rather standard Newcomb-like mind-reading paradox.