Thanks for the clarification; for some reason I didn’t pick that up from the original post. Nonetheless, the same basic reasoning applies, no? For all the AI knows, it’s being trained on ancient data as part of some historical experiment, and RSA-2048 was successfully factored thousands of years ago. If it does find a factorization “in the wild,” that gives it important information about the minimum compute available in the universe it inhabits, but I don’t think it should necessarily reduce its belief that it’s in an adversarial testing environment, since that’s exactly the sort of “red herring” I’d expect to see if I were being tested by a computationally advanced civilization. To make an analogy with human philosophy: what could convince a rational person that they most definitely do not live in a simulated universe?