Then the mathematicians’ uncertainty was such that they would have causal graphs with TSW causing FLT.
Well, the direction of the arrow would be unspecified. After all, ¬FLT ⟹ ¬TSW is equivalent to TSW ⟹ FLT by contraposition, so there’s a symmetry here. This often happens in causal modelling; many causal discovery algorithms can determine that an edge exists between two variables without being able to orient it.
Also, conjectures are the causes of their proofs rather than vice versa. You can see this in the fact that your degrees of belief in the correctness of purported proofs are independent given that the conjecture is true (or false), but dependent when the truth-value of the conjecture is unknown.
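To make this concrete, here is a minimal sketch of the fork C → S1, C → S2, with toy probabilities chosen purely for illustration (none of the numbers come from the discussion). Each purported proof can only be correct if C is true; enumerating the joint distribution shows S1 and S2 are independent given C but informative about each other marginally.

```python
# Toy fork model: C -> S1, C -> S2.
# C: the conjecture is true; S1, S2: purported proofs 1 and 2 are correct.
from itertools import product

P_C = 0.5                               # prior that the conjecture is true
P_S_GIVEN_C = {True: 0.3, False: 0.0}   # a correct proof of a false statement is impossible

def joint(c, s1, s2):
    """P(C=c, S1=s1, S2=s2): S1 and S2 are independent given C by construction."""
    pc = P_C if c else 1 - P_C
    p1 = P_S_GIVEN_C[c] if s1 else 1 - P_S_GIVEN_C[c]
    p2 = P_S_GIVEN_C[c] if s2 else 1 - P_S_GIVEN_C[c]
    return pc * p1 * p2

def prob(pred):
    return sum(joint(c, s1, s2)
               for c, s1, s2 in product([True, False], repeat=3)
               if pred(c, s1, s2))

# Marginally, S1 and S2 are dependent: learning S1 is correct settles C and raises P(S2).
p_s2 = prob(lambda c, s1, s2: s2)
p_s2_given_s1 = prob(lambda c, s1, s2: s1 and s2) / prob(lambda c, s1, s2: s1)
print(p_s2, p_s2_given_s1)              # 0.15 vs 0.3 -> dependent

# Given C = true, they are independent.
p_s2_given_c = prob(lambda c, s1, s2: c and s2) / prob(lambda c, s1, s2: c)
p_s2_given_c_s1 = prob(lambda c, s1, s2: c and s1 and s2) / prob(lambda c, s1, s2: c and s1)
print(p_s2_given_c, p_s2_given_c_s1)    # 0.3 vs 0.3 -> independent given C
```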
Apart from this detail, I agree with your comment and I find it to be similar to the way I think about the causal structure of math.
This is very different from how I think about it. Could you expand a little? What do you mean by “when the truth-value of the conjecture is unknown”? That neither C nor ¬C is in your bound agent’s store of known theorems?
your degrees of belief in the correctness of purported proofs are independent given that the conjecture is true (or false),
Let S1, S2 be purported single-conclusion proofs of a statement C.
If I know C is false, the purported proofs are trivially independent because they’re both fully determined to be incorrect?
Why is S1 independent of S2 given C is true? Are you saying that learning S2⊢C puts C in our theorem bank, and knowing C is true can change our estimate that S1⊢C, but proofs aren’t otherwise mutually informative? If so, what is the effect of learning ⊨C on P(S1⊢C)? And why don’t you consider proofs which, say, only differ after the first n steps to be dependent, even given the truth of their shared conclusion?
What do you mean by “when the truth-value of the conjecture is unknown”? That neither C nor ¬C is in your bound agent’s store of known theorems?
I meant that the agent is in some state of uncertainty. I’m trying to contrast the case where we are fairly certain of either C or ¬C with the case where we have a significant degree of uncertainty.
If I know C is false, the purported proofs are trivially independent because they’re both fully determined to be incorrect?
Yeah, this is just the trivial case.
Why is S1 independent of S2 given C is true?
I was talking about the simple case where there are no other causal links between the two proofs, like common lemmas or empirical observations. Those do change the causal structure by adding extra nodes and arrows, but I was making the simplifying assumption that we don’t have those things.
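To see how a common lemma changes this, here is a variation on the earlier sketch (again with toy probabilities chosen purely for illustration): adding a shared lemma node Lm → S1, Lm → S2 alongside C makes the two proofs dependent even after conditioning on C.

```python
# Extending the earlier toy model with a shared lemma node Lm -> S1, Lm -> S2.
# Each purported proof is correct only if both C and the lemma hold.
from itertools import product

P_C, P_LM, P_S = 0.5, 0.5, 0.6   # prior on C, prior on the lemma, success chance given both

def joint(c, lm, s1, s2):
    pc = P_C if c else 1 - P_C
    pl = P_LM if lm else 1 - P_LM
    p_ok = P_S if (c and lm) else 0.0
    p1 = p_ok if s1 else 1 - p_ok
    p2 = p_ok if s2 else 1 - p_ok
    return pc * pl * p1 * p2

def prob(pred):
    return sum(joint(*w) for w in product([True, False], repeat=4) if pred(*w))

# Even given C = true, S1 and S2 are now dependent, via the shared lemma.
p_s2_given_c = prob(lambda c, lm, s1, s2: c and s2) / prob(lambda c, lm, s1, s2: c)
p_s2_given_c_s1 = (prob(lambda c, lm, s1, s2: c and s1 and s2)
                   / prob(lambda c, lm, s1, s2: c and s1))
print(p_s2_given_c, p_s2_given_c_s1)   # 0.3 vs 0.6 -> no longer independent given C
```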
Hmm, you are right. Thanks for the correction!