Adding some structure to this hypothetical: at time t=0, Bob and Daisy have certain priors for their beliefs about sorcery, which they have not yet adjusted for this argument. Bob’s belief was in Position 1, held weakly, and Daisy’s was in Position 3, held strongly.
I’ll call your argument A0.
At time t=1, Bob and Daisy are both made aware of A0 and its implications for adjusting their beliefs. They update; Bob’s belief in 1 increases, and Daisy’s belief in 3 decreases.
More arguments:
A1: If Position 1 is true, then Bright is likely to cause you to increase your belief in him; therefore, an increase in your belief in Bright is evidence for Position 1.
A1′: Corollary: Decreasing your belief in Bright is evidence against Position 1.
A2: If Position 3 is true, then Dark is likely to cause you to decrease your belief in him; therefore, a decrease in your belief in Dark is evidence for Position 3.
A2′: Corollary: Increasing your belief in Dark is evidence against Position 3.
At time t=2, Bob and Daisy are exposed to A1 and A2, and their corollaries A1′ and A2′. If they believe these, they should both increase credence for Positions 1 and 3 (following A1 and A2), then increase credence for Position 1 and decrease it for Position 3 (following A1 and A2′), then follow A1 and A2 again, and so on. This might be difficult to resolve, as you mention in your first question.
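To make the loop concrete, here is a minimal sketch with invented numbers (the starting credences, the 1.5 likelihood ratios, and the identification of “belief in Bright” with credence in Position 1 are all my assumptions, not anything fixed by the hypothetical). It shows Bob’s credence ratcheting upward under A1 while Daisy’s bounces between two values under A2/A2′:

```python
def update(p, likelihood_ratio):
    """One odds-form Bayesian update: posterior odds = prior odds * LR."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

bob, daisy = 0.6, 0.55   # post-A0 credences in Positions 1 and 3 (invented)
prev_daisy = 0.7         # Daisy's pre-A0 credence, so her first change was a decrease

for t in range(2, 8):
    # A1: Bob's credence in Position 1 just rose, which A1 counts as further
    # evidence for Position 1, so he updates upward every round.
    bob = update(bob, 1.5)
    # A2 vs. A2': which applies flips with the direction of Daisy's last change.
    lr = 1.5 if daisy < prev_daisy else 1 / 1.5
    prev_daisy = daisy
    daisy = update(daisy, lr)
    print(f"t={t}: Bob={bob:.3f}, Daisy={daisy:.3f}")
```

On these numbers Bob runs away toward certainty while Daisy oscillates between two values, which is exactly the “difficult to resolve” behavior.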
However, there is a simple reason to reject A1 and A2: their influence is totally screened off! Bob and Daisy know why they revised their beliefs: it was because of the valid argument A0. Unless Bright and Dark can affect the apparent validity of logical arguments (in which case your thoughts can’t be trusted anyway), A0 is valid independent of which position is true. So they begin the feedback loop but stop after a single iteration.
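The screening-off claim can be checked on a toy joint model (the factorization and the numbers below are assumptions I’m adding for illustration): if the belief revision is caused by exposure to A0 alone, occurring with the same probability whether or not Position 1 is true, then conditioning on the revision leaves the posterior exactly at the prior.

```python
# Toy check of screening off: the revision depends only on having seen A0,
# identically under both hypotheses, so it carries no evidence about them.
p_pos1 = 0.5  # prior on Position 1 (invented)

def p_revised_given(seen_a0, pos1):
    # Same value whether pos1 is True or False: the argument, not the
    # sorcerer, drives the revision.
    return 1.0 if seen_a0 else 0.0

num = p_pos1 * p_revised_given(True, True)
den = num + (1 - p_pos1) * p_revised_given(True, False)
print(num / den)  # 0.5, i.e. unchanged from the prior
```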
There is a valid reason they might want to continue a weaker loop.
A3: That you have encountered A0 is evidence for the sorcerers whose goals are served by having you be influenced by A0.
A3′: That you have encountered A3 is evidence for the sorcerers whose goals are served by having you be influenced by A3.
A3″: That you have encountered A3′ is evidence for the sorcerers whose goals are served by having you be influenced by A3′.
etc.
But this is only true if they didn’t reason out A0 or A3 for themselves, and even then A3′, A3″, etc. should be considered obvious implications of A3 by any careful reasoner. (In fact, A3 is properly more like “That you have encountered a valid argument is evidence for the sorcerers whose goals are served by having you be influenced by that argument.”) So that adds at most one more layer, barring silly Tortoise-and-Achilles arguments.
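Even for a reasoner who insists on iterating the tower anyway, there is a simple reason it stays bounded, provided each meta-layer is weaker than the last (an assumption I’m adding; the strength of A3 and the decay rate below are invented): the total log-odds shift is a convergent geometric series, so the whole tower is worth at most one finite update.

```python
import math

base = math.log(1.2)  # log-likelihood-ratio assigned to A3 itself (invented)
decay = 0.5           # each further layer assumed half as strong (invented)

total = sum(base * decay**k for k in range(1000))
print(total, "vs. closed form", base / (1 - decay))  # converges to a finite shift
```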
Given all that, for your second question, you still should take their beliefs into account, but possibly to a slightly lesser degree.
A point I’m confused on: when you, based on A0, update on their A0-updated beliefs, are you double-counting A0? If so, you should update to a lesser degree. But is that so?
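Here is the double-counting worry as arithmetic, with an invented likelihood ratio for A0: if your friend’s posterior moved only because of A0, and you treat that movement as fresh independent evidence on top of your own A0 update, you apply A0’s likelihood ratio twice.

```python
def update(p, lr):
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

prior, lr_a0 = 0.5, 2.0      # invented numbers

mine = update(prior, lr_a0)  # my own A0 update: 2/3
# The friend's shift also came entirely from A0; naively treating it as
# independent evidence applies A0's likelihood ratio a second time:
naive = update(mine, lr_a0)  # 0.8 -- A0 counted twice
print(mine, naive)
```

On this toy model, at least, the answer is yes: to the extent their update traces back to the same A0, you should discount it.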
I don’t think I completely follow everything you say, but let’s take a concrete case. Suppose I believe that Dark is extremely powerful and clever and wishes to convince me he doesn’t exist. From this I can conclude that if I believe he exists, he can’t possibly exist (because he’d have found a way to convince me otherwise), so I conclude he can’t exist (or at least that the probability is very low). Now I’ve convinced myself he doesn’t exist. But maybe that’s exactly how he operates! So I have new evidence that he does in fact exist. I think there’s some sort of paradox in this situation. You can’t say that this evidence is screened off, since I hadn’t considered the result of my reasoning until I arrived at it. It seems to me that your belief either oscillates between two numbers, or else your updates get smaller and you converge to some number in between.
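The “oscillate or converge” picture can be sketched as a fixed-point iteration (the update map below is invented; nothing about Dark pins down its actual shape): if each round of “but maybe that’s how he operates” pushes your credence toward the opposite pole with a slope of magnitude less than 1, the bouncing is damped and you settle at the fixed point p = f(p); with magnitude exactly 1, you would bounce between two numbers forever.

```python
def f(p):
    # Invented update map: high credence in Dark argues him away (push down),
    # low credence triggers "that's how he operates" (push up). Slope -0.6.
    return 0.8 - 0.6 * p

p = 0.9
for step in range(12):
    p = f(p)
    print(f"round {step}: {p:.4f}")
# Damped oscillation converging to the fixed point p* = 0.8 / 1.6 = 0.5.
```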
LOL.
The argument goes something like this: “I refuse to prove that I exist,” says God, “for proof denies faith, and without faith I am nothing.”
“But,” says Man, “the Babel fish is a dead giveaway, isn’t it? It could not have evolved by chance. It proves you exist, and so therefore, by your own arguments, you don’t. QED.”
“Oh dear,” says God, “I hadn’t thought of that,” and promptly vanishes in a puff of logic.
“Oh, that was easy,” says Man, and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing.
The scenario you outline is exactly the same as Daisy’s half of the analysis above, ending at A3. The result of your reasoning isn’t further evidence; it’s screened off by the fact that it’s your reasoning, and not the actions of an outside force.