If you accept as “true” some statements that are not testable, and other statements that are testable, then perhaps we just have a labeling problem? We would have “true-and-I-can-prove-it” and “true-and-I-can’t-prove-it.” I’d be surprised if, given those two categories, many people didn’t elevate the testable statements above the untestable ones in “truthiness.”
Is this different from having higher confidence in statements for which I have more evidence?
For me, if something is truly, knowably, not falsifiable, then there is no evidence for it that matters. But many things that are called unfalsifiable are probably falsifiable eventually. Take MWI: do we know QM so well that we can be sure MWI has no implications that are experimentally distinguishable from those of non-MWI theories? Something like MWI, for me, is probably falsifiable at some level; I just don’t know how to falsify it right now, and I am not aware of anybody I trust who does. The “argument” over MWI is then really an argument over whether developing falsifiable theories from a story that includes MWI is more or less likely to be efficiently productive than developing them from a story that rejects MWI. We are arguing over the quality of intuitions years before the falsification or verification can actually take place, much as we spend a lot of effort anticipating the implications of an AI that is not even close to being built.
I actually think the discussion of MWI is useful, as someone who does participate in forming theories and opinions about theories. I just think it is NOT a discussion about scientific truth, or at least it isn’t yet. It is not an argument over which horse won the last race; rather, it is an argument over what kinds of horses will be running races a few years from now, and which ones will win.
But yes, more evidence means more confidence, which I think is entirely consistent with the map/territory/Bayesian approach generally credited around here.
Where would mathematical statements fit in this classification of yours? They can be proven, but many of them can’t be tested, and even for the ones that can be tested, the proof is generally considered better evidence than the test.
In fact, you are implicitly relying on a large untested (and mostly untestable) framework to describe the relationship between whatever sense input constitutes the result of one of your tests, and the proposition being tested.
There’s another category: necessary truths. Deductive inferences from premises are not susceptible to disproof.
Thus, the categories for this theory of truthful statements are: necessary truths, empirical truths (“true-and-I-can-prove-it”), and “true-and-I-can’t-prove-it.”
Generally, this categorization scheme will put most contentious moral assertions into the third category.
Agreed, except for your non-conventional use of the word “prove,” which is normally restricted to things in the first category.
This may be a situation where the modern world’s resources start to break down the formerly strong separation between mind and world.
These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I’ve implemented exact rational arithmetic, I can SHOW STATISTICALLY the commutative law, the associative law, that 2+2 never equals 5, that numbers have additive and multiplicative inverses, and on and on.
These modern machines seem to render the statements within axiomatic mathematical systems as testable and falsifiable as any other physical facts.
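For instance, here is a minimal sketch of the kind of statistical check I have in mind (Python, using the standard fractions module for exact rational arithmetic; the particular laws and trial count are just illustrative):

```python
import random
from fractions import Fraction

def random_rational():
    # Exact rationals avoid the rounding that would break these laws in floating point.
    return Fraction(random.randint(-10**6, 10**6), random.randint(1, 10**6))

TRIALS = 100_000
for _ in range(TRIALS):
    a, b, c = random_rational(), random_rational(), random_rational()
    assert a + b == b + a                  # commutative law
    assert (a + b) + c == a + (b + c)      # associative law
    assert a + (-a) == 0                   # additive inverse
    if a != 0:
        assert a * (1 / a) == 1            # multiplicative inverse
    assert Fraction(2) + Fraction(2) != 5  # 2 + 2 never equals 5

print(f"All laws held on {TRIALS} random samples.")
```

Of course, each run only samples finitely many cases; the claim is that the evidence this produces is of the same kind as any other experimental evidence.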
How would you do this for something like the Poincaré conjecture or the uncountability of the reals?
Also how do you show that your implementation does in fact compute addition without using math?
Frankly the argument you’re trying to make is like arguing that we no longer need farms since we can get our food from supermarkets.
Edit: Also the most you can show STATISTICALLY is that the commutative law holds for most (or nearly all) examples of the size you try, whereas mathematical proofs can show that it always holds.
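To make the contrast concrete, here is what the proof side looks like in Lean (using the standard-library lemma Nat.add_comm; this is just one way to state it):

```lean
-- Commutativity of addition on the naturals, proved for every pair at once,
-- not merely checked on samples. `example` restates the library lemma.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- And "2 + 2 never equals 5" as a theorem rather than a sampled observation:
example : 2 + 2 ≠ 5 := by decide
```

A single proof term covers infinitely many cases that no amount of random sampling can exhaust.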
The definition of proof is the issue. An instrumentalist requires falsifiable predictions; a realist settles for acceptable logic when no predictions are available.
A rationalist (in the original sense of the word) would go even further, requiring a logical proof and not accepting a mere prediction as a substitute.