Consider the case where a friend says he saw a UFO. There are two possibilities: either the friend is lying/insane/gullible, or UFOs are real (there are probably some other possibilities, but for the sake of argument let’s focus on these).
Your friend’s statement can have different effects depending on what you already believe. If either probability is already at ~100%, you have no more work to do. I.e., if you’re already sure your friend is a liar, you dismiss this as yet another lie and don’t start believing in UFOs; if you’re already sure UFOs exist, you take this as yet another UFO sighting and don’t start doubting your friend.
If you’re not ~100% sure of either statement, then your observation will increase both the probability that your friend is a liar and the probability that UFOs are real, but by different amounts. If you think your friend usually tells the truth, but you’re not sure, it will increase your probability of UFOs quite a bit (your friend wouldn’t lie to you!). But as long as you’re not certain UFOs exist, you also have to leave some room for the case where they aren’t real, and in that case the statement increases your probability that your friend is a liar.
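To make this concrete, here’s a minimal Python sketch of the update. Every number below (the priors, and how likely each kind of friend is to report a UFO) is a made-up assumption chosen purely for illustration, not a claim about real-world rates:

```python
# Toy Bayesian update for the UFO example. All numbers are hypothetical.
p_L = 0.02   # prior that the friend is a liar (you mostly trust him)
p_U = 0.05   # prior that UFOs are real

def p_report(liar, ufo):
    """Assumed chance the friend reports a UFO in each possible world."""
    if liar:
        return 0.30                 # a liar tells UFO stories whether or not UFOs exist
    return 0.50 if ufo else 0.001   # an honest friend needs a real UFO to report one

# Weight each of the four (liar, ufo) worlds by its prior and by how likely
# it is to produce the report, then renormalize: this is just Bayes' rule.
joint = {}
for liar in (True, False):
    for ufo in (True, False):
        prior = (p_L if liar else 1 - p_L) * (p_U if ufo else 1 - p_U)
        joint[(liar, ufo)] = prior * p_report(liar, ufo)

p_R = sum(joint.values())
post_L = sum(v for (liar, _), v in joint.items() if liar) / p_R
post_U = sum(v for (_, ufo), v in joint.items() if ufo) / p_R

print(f"P(liar): {p_L:.2f} -> {post_L:.2f}")   # 0.02 -> ~0.19
print(f"P(UFO):  {p_U:.2f} -> {post_U:.2f}")   # 0.05 -> ~0.79
```

Both posteriors rise above their priors: the report is evidence for both explanations at once. With these particular numbers, trusting the friend pushes most of the shift onto the UFO hypothesis; make the friend less trustworthy and the same report mostly raises the liar probability instead.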
When you hear a great argument for P, your pre-existing beliefs determine how you update in the same way as in the UFO example. The argument could mean that your interlocutor is a rhetorical genius so brilliant they can think up great arguments even for false positions. Or it could mean P is true. In real life, the probability that the interlocutor is such a rhetorical genius is always less than ~100%, which means the argument has to increase your probability of P at least a little.
In your example, we already know that the AI is a rhetorical genius who can create an arbitrarily good argument for anything. That totally explains away the brilliant arguments, leaving nothing to be explained by Zoroastrianism actually being true. It’s like when a friend who is a known insane liar says he saw a UFO: the insane-liar part already explains away the evidence, so even though you’re hearing words that sound like evidence, no probabilities actually shift.
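In the same toy terms: if the “liar” (or “AI that can argue anything”) hypothesis is already at certainty, and such a source is assumed to produce the report, or the brilliant argument, equally well whether or not the claim is true, then the likelihood ratio is 1 and nothing moves. The numbers below are again just placeholders:

```python
# Explaining away, sketched with made-up numbers: the source is certainly a
# liar / an arbitrarily persuasive AI, and produces the "evidence" with the
# same probability whether or not U is true.
p_U = 0.05

p_report_if_U     = 0.30
p_report_if_not_U = 0.30   # same as above: truth makes no difference to this source

post_U = (p_report_if_U * p_U) / (
    p_report_if_U * p_U + p_report_if_not_U * (1 - p_U)
)
print(f"{post_U:.4f}")  # 0.0500: the likelihood ratio is 1, so P(U) stays at its prior
```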
I understand the principle, yes. But it implies that if your friend is known to be a liar, no argument he gives needs to be examined on its own merits. What if he is a liar and he really did see a UFO? What if the events “he is a liar” and “there’s a UFO” are not independent? I think your argument works if they are independent; if they are not, it doesn’t. If UFOs appear mostly to liars, you can’t ignore his evidence. Do you agree? In my case they are not independent: it’s easier to argue for a true proposition, even for a very intelligent AI. (Here I assume every probability is strictly less than 1.)
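To make the dependence point concrete, here’s the same kind of toy calculation, now assuming (purely for illustration) that even a certainly-brilliant persuader finds it slightly easier to produce a great argument for a true claim than for a false one:

```python
# Dependence case, with made-up numbers: the persuader is certainly a
# rhetorical genius, but the quality of argument it can produce is assumed
# to depend a little on whether P is actually true.
prior_P = 0.05

p_great_arg_if_true  = 0.99
p_great_arg_if_false = 0.90   # still very good at arguing for falsehoods, just not quite as good

post_P = (p_great_arg_if_true * prior_P) / (
    p_great_arg_if_true * prior_P + p_great_arg_if_false * (1 - prior_P)
)
print(f"{prior_P:.3f} -> {post_P:.3f}")  # 0.050 -> ~0.055: a small but nonzero update
```

The closer those two likelihoods are to each other, the smaller the update, shrinking to zero exactly in the independent case above.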