More generally, for me to expect your beliefs to correlate with reality, I have to either think that reality is the cause of your beliefs, expect your beliefs to alter reality, or believe that some third factor is influencing both of them.
I can construct examples where for this to be true requires us to treat mathematical truths as causes. Of course, this causes problems for the Bayesian definition of “cause”.
Yes. An argument similar to this should still be in the other-edited version of my unfinished TDT paper, involving a calculator on Venus and a calculator on Mars, the point being that if you’re not logically omniscient then you need to factor out logical uncertainty for the Markov property to hold over your causal graphs, because physically speaking, all common causes should’ve been screened off by observing the calculators’ initial physical states on Earth. Of course, it doesn’t follow that we have to factor out logical uncertainty as a causal node that works like every other causal node, but we’ve got to factor it out somehow.
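A toy numerical sketch of the Venus/Mars point (all numbers invented for illustration): treat uncertainty about the value of a sum as a node that is a common cause of two physically isolated calculators' displays. Marginalizing the node out leaves the displays correlated; conditioning on it screens them off, restoring the Markov property.

```python
from itertools import product

# Invented toy model: a "logical fact" node L (the value of 387 + 875)
# acts as a common cause of two physically isolated calculators' displays.
prior = {1262: 0.90, 1261: 0.05, 1263: 0.05}  # subjective logical uncertainty
P_CORRECT = 0.99                              # per-calculator reliability

def p_output(shown, truth):
    """Probability a calculator shows `shown` given the true sum is `truth`."""
    return P_CORRECT if shown == truth else (1 - P_CORRECT) / 2

values = list(prior)
# Joint over (Venus display, Mars display), marginalizing out L.
joint = {(v, m): sum(prior[l] * p_output(v, l) * p_output(m, l) for l in values)
         for v, m in product(values, values)}

marg = {v: sum(joint[(v, m)] for m in values) for v in values}
p_agree = sum(p for (v, m), p in joint.items() if v == m)
p_agree_if_independent = sum(q * q for q in marg.values())

# With L marginalized out, the displays are correlated even though the
# calculators never physically interact:
assert p_agree > p_agree_if_independent

# Observing Venus updates the prediction for Mars -- the correlation runs
# through the logical node, not through any physical signal:
p_mars = marg[1262]
p_mars_given_venus = joint[(1262, 1262)] / marg[1262]
assert p_mars_given_venus > p_mars

# Conditioning on L screens the two displays off from each other: given L,
# the joint factorizes as p_output(v, l) * p_output(m, l) by construction.
```

The sketch deliberately says nothing about *what kind* of node L is, only that the graph needs some such node for the Markov property to hold.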
My point is more general than this. Namely, that a calculator on Earth and a calculator made by aliens in the Andromeda galaxy would correspond despite humans and the Andromedeans never having had any contact.
Is there some reason not to treat logical stuff as normal causal nodes? Does that cause us actual trouble, or is it just a bit confusing sometimes?
In causal models, we can have A → B, E → A, E → ~B. Logical uncertainty does not seem offhand to have the same structure as causal uncertainty.
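That structural difference can be made concrete with invented numbers: a three-node model in which E raises A, A raises B at every fixed level of E, and yet E lowers B overall. No configuration of logical implications behaves this way: if E entailed A and A entailed B, E could not entail ¬B.

```python
from itertools import product

# Invented CPTs for a causal model with arrows E -> A, A -> B, E -> B.
p_E = {1: 0.5, 0: 0.5}
p_A_given_E = {1: 0.8, 0: 0.2}              # E raises A
p_B_given_AE = {(1, 1): 0.5, (1, 0): 0.9,   # A raises B at each level of E,
                (0, 1): 0.1, (0, 0): 0.5}   # while E lowers B at each level of A

joint = {(e, a, b): p_E[e]
         * (p_A_given_E[e] if a else 1 - p_A_given_E[e])
         * (p_B_given_AE[(a, e)] if b else 1 - p_B_given_AE[(a, e)])
         for e, a, b in product((0, 1), repeat=3)}

def p(pred):
    return sum(q for eab, q in joint.items() if pred(*eab))

def cond(pred, given):
    return p(lambda e, a, b: pred(e, a, b) and given(e, a, b)) / p(given)

# E -> A: E makes A more likely ...
assert cond(lambda e, a, b: a == 1, lambda e, a, b: e == 1) > \
       cond(lambda e, a, b: a == 1, lambda e, a, b: e == 0)
# ... and A -> B: A raises B at each fixed level of E ...
for e in (0, 1):
    assert p_B_given_AE[(1, e)] > p_B_given_AE[(0, e)]
# ... yet E -> ~B: E makes B less likely overall.
assert cond(lambda e, a, b: b == 1, lambda e, a, b: e == 1) < \
       cond(lambda e, a, b: b == 1, lambda e, a, b: e == 0)
```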
You seem to be confusing the causal arrow with the logical arrow. As endoself points out here, proofs logically imply their theorems, but a theorem causes its proof.
Can you provide an example? I would claim that for any model in which you have a mathematical truth as a node in a causal graph, you can replace that node by whatever series of physical events caused you to believe that mathematical truth.
I add 387+875 to get 1262; from this I can conclude that anyone else doing the same computation will get the same answer, despite my never having interacted with them.
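A sketch of the claim, with the two routines below standing in for two people who have never interacted: independently written implementations of the same computation, taking different physical paths, land on the same answer.

```python
def add_directly(a, b):
    # One "person": native integer addition.
    return a + b

def add_by_counting(a, b):
    # Another "person": repeated increment -- a deliberately different
    # route to the same function.
    total = a
    for _ in range(b):
        total += 1
    return total

# The two definitions share no machinery, yet:
assert add_directly(387, 875) == add_by_counting(387, 875) == 1262
```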
You can’t conclude that unless you are aware of the contingent fact that they are capable of getting the answer right.
“The same computation” doesn’t cover that?
Why would you want a mathematical truth on a causal graph? Are the transition probabilities ever going to be less than 1.0?
The transition probabilities from the mathematical truth to something non-mathematical will certainly be less than 1.0.
And the transition probabilities to a truth will be 1.0. So why write it in? It would be like sprinkling a circuit diagram with zero-ohm resistors.
Because otherwise the statement I quoted in the great-great-grandparent becomes false.
Inasmuch as you have stipulated that “performing the same calculation” means “performing the same calculation correctly”, rather than something like “launching the same algorithm but possibly crashing”, your statement is tautologous. In fact, it is a special case of the general statement that anyone successfully performing a calculation will get the same result as everyone else. But why would you want to use a causal diagram to represent a tautology? The two have different properties. Causal diagrams have <1.0 transition probabilities, which tautologies don’t. Tautologies have conceptually intelligible relationships between their parts, which causal diagrams don’t.
Observe that your two objections cancel each other out. If someone performs the same calculation, there is a significant (but <1.0) chance that it will be done correctly.
What has that to do with mathematical truth? You might as well say that if someone follows the same recipe there is a significant chance that the same dish will be produced. Inasmuch as you are talking about something that can haphazardly fail, you are not talking about mathematical truth.
I can predict what someone else will conclude, without any causal relationship, in the conventional sense, between us.
Your prediction is a prediction of what someone else will conclude, given a set of initial conditions (the mathematical problem) and a set of rules to apply to these conditions. The conclusion that you arrive at is a causal descendant of the problem and the rules of mathematics; the conclusion that the other person arrives at is a causal descendant of the same initial problem and the same rules.
That’s the causal link.
That’s my point. Specifically, that one should have nodes in one’s causal diagram for mathematical truths, what you called “rules of mathematics”.
Surely the node should be “person X was taught basic mathematics”, and not mathematics itself?
The point of having the node is to have a common cause of person X’s beliefs about mathematics and person Y’s beliefs about mathematics that explains why these two beliefs are correlated even if both discovered said mathematics independently.
What has that to do with any causal powers of mathematical truth?
If you want your causal graph to have the property I quoted here, you need to add nodes for mathematical truths.
Two people can arrive at the same solution to a crossword, but that does not mean there is a Cruciverbial Truth that has causal powers.
Yes it does. In this case said truth even has a physical manifestation, i.e., the crossword-writer’s solution as it exists in some combination of his head and his notes, which is causal to the form of the crossword the solver sees.
It only has a physical manifestation. Cruciverbial Truth only summarises what could have been arrived at by a massively fine-grained examination of the crossword-solver’s neurology. It doesn’t have causal powers of its own. It’s redundant in relation to physics.
Mathematical truths do behave like causes. Remember, Bayesian probabilities represent subjective uncertainty. Yes, my uncertainty about the Riemann hypothesis is correlated with my uncertainty about other mathematical facts in the same way that my uncertainty about some physical facts is correlated with my uncertainty about others, so I can represent them both as Bayesian networks (really, one big Bayesian network, as my uncertainty about math is also correlated with my uncertainty about the world).
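The “one big network” point can be sketched with invented reliability numbers: a node for a mathematical conjecture M linked to a physical node (a proof checker prints “verified”). Conditioning on the physical observation updates the mathematical belief by ordinary Bayes, with no special machinery at the math/physics boundary.

```python
# Invented numbers: prior belief in a mathematical conjecture M, and a
# physical node -- a proof checker printing "verified" -- whose
# probability depends on M.
p_M = 0.5                       # prior over the mathematical node
p_verified_given_M = 0.999      # checker accepts a correct proof
p_verified_given_not_M = 0.001  # checker wrongly accepts (bug, cosmic ray)

# Ordinary conditioning across the math/physics boundary:
p_verified = p_M * p_verified_given_M + (1 - p_M) * p_verified_given_not_M
p_M_given_verified = p_M * p_verified_given_M / p_verified

# A physical observation moved a mathematical belief:
assert p_M_given_verified > p_M
```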