I think you are adding further specifications to the original setting. Your original description assumed that the AI is a very clever arguer that constructs highly persuasive deceptive arguments. Now you assume that the AI actively tries to make its arguments more persuasive than you expect. You can stipulate for argument's sake that the AI can always make a more convincing argument than you expect, but 1) it's not clear whether this is even possible in realistic circumstances, and 2) it obscures the (interesting and novel) original problem ("is evidence of evidence equally valuable as the evidence itself?") with a rather standard Newcomb-like mind-reading paradox.