Someone tells me morality has falsifiable truths in it; where is the experimental test?
You are describing instrumentalism, which is an unpopular position on this forum, where most follow EY’s realism. For a realist, untestable questions have answers, justified on the basis of their preferred notion of Occam’s razor.
If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true.
Replace “moral truth” with “many worlds”, and you get EY’s understanding of QM.
Concerns with confusing the map with the territory are extensively discussed on this forum. If it walks like a duck and quacks like a duck, is it not instrumentalism?
The difference is whether you believe that even though it walks like a duck and quacks like a duck, it could in fact be a well-designed mechanical emulation of a duck indistinguishable from an organic duck, and then prefer the former model, because Occam’s razor!
Occam’s razor is a strategy for being a more effective instrumentalist. It may or may not be elevated to some other status, but this is at least one powerful draw that it has. Do not infer robot ducks when regular ducks will do; do not waste your efforts (instrumentality!) designing for robot ducks when your only evidence so far (razor) is ducks. Or even more compactly, in your terms: whether these ducks are “real” or “emulations,” design only for what you actually know about these ducks, not for something that takes a lot of untested guesswork to presume about the ducks.
Do not spend a lot of time filling in the details of unreachable lands on your map.
Yep. Also, do not argue which of the many identical maps is better.
If you accept as “true” some statements that are not testable, and other statements that are testable, then perhaps we just have a labeling problem? We would have “true-and-I-can-prove-it” and “true-and-I-can’t-prove-it.” I’d be surprised if, given those two categories, there would be many people who wouldn’t elevate the testable statements above the untestable ones in “truthiness.”
Is this different from having higher confidence in statements for which I have more evidence?
For me, if it is truly, knowably, not falsifiable, then there is no evidence for it that matters. But many things that are called not falsifiable are probably falsifiable eventually. So with MWI: do we know QM so well that we know there are no implications of MWI that are experimentally distinguishable from non-MWI theories? Something like MWI, for me, is probably falsifiable at some level; I just don’t know how to falsify it right now, and I am not aware of anybody I trust who does. Then the “argument” over MWI is really an argument over whether developing falsifiable theories from a story that includes MWI is more or less likely to be efficiently productive than developing falsifiable theories from a story that rejects MWI. We are arguing over the quality of intuitions years before the falsification or verification can actually take place, much as we spend a lot of effort anticipating the implications of an AI that is not even close to being built.
I actually think the discussions of MWI are useful, as someone who does participate in forming theories and opinions about theories. I just think it is NOT a discussion about scientific truth, or at least not yet. It is not an argument over which horse won the last race; rather, it is an argument over what kinds of horses will be running a race a few years from now, and which ones will win those races.
But yes, more evidence means more confidence, which I think is entirely consistent with the map/territory/Bayesian approach generally credited around here.
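(To make that concrete, a toy sketch with made-up numbers: in odds form, Bayes’ rule says each confirmed prediction multiplies your odds by its likelihood ratio, so only claims that issue testable predictions ever collect this kind of boost.)

```python
# Toy illustration of odds-form Bayesian updating (all numbers hypothetical).
# Each confirmed prediction multiplies the odds by its likelihood ratio;
# an untestable claim issues no predictions, so its odds never move.
odds = 1.0               # prior odds 1:1, i.e. probability 0.5
likelihood_ratio = 4.0   # each observation is 4x likelier if the theory is true
for _ in range(3):       # three confirmed predictions
    odds *= likelihood_ratio
probability = odds / (1 + odds)
print(probability)       # 64/65, about 0.985
```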
We would have “true-and-I-can-prove-it” and “true-and-I-can’t-prove-it.” I’d be surprised if, given those two categories, there would be many people who wouldn’t elevate the testable statements above the untestable ones in “truthiness.”
Where would mathematical statements fit in this classification of yours? They can be proven, but many of them can’t be tested, and even for the ones that can be tested, the proof is generally considered better evidence than the test.
In fact, you are implicitly relying on a large untested (and mostly untestable) framework to describe the relationship between whatever sense input constitutes the result of one of your tests, and the proposition being tested.
There’s another category: necessary truths. Deductive inferences from premises are not susceptible to disproof.
Thus, the categories for this theory of truthful statements are: necessary truths, empirical truths (“true-and-I-can-prove-it”), and “true-and-I-can’t-prove-it.”
Generally, this categorization scheme will put most contentious moral assertions into the third category.
Agreed, except for your non-conventional use of the word “prove”, which is normally restricted to things in the first category.
This may be a situation where the modern world’s resources start to break down the formerly strong separation between mind and world.
These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I’ve implemented floating point math, I can SHOW STATISTICALLY the commutative law, the associative law, that 2+2 never equals 5, that numbers have additive and multiplicative inverses and on and on and on.
These modern machines seem to render the statements within axiomatic mathematical systems as testable and falsifiable as any other physical facts.
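Concretely, the kind of statistical check described above might look something like this (a minimal sketch in Python; integers are used, since floating-point addition is commutative but not associative, so an associativity check would need exact arithmetic):

```python
import random

# A minimal sketch of "showing a law statistically": probe arithmetic
# identities with random inputs rather than proving them. Integer inputs
# are assumed; IEEE-754 floats would pass the commutativity check below
# but fail an analogous associativity check.
N = 100_000
for _ in range(N):
    a = random.randint(-10**9, 10**9)
    b = random.randint(-10**9, 10**9)
    assert a + b == b + a    # commutative law: never observed to fail
    assert a + (-a) == 0     # additive inverse
    assert 2 + 2 != 5        # 2+2 never equals 5, in any trial
print(f"{N} random trials passed")
```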
These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I’ve implemented floating point math, I can SHOW STATISTICALLY the commutative law, the associative law, that 2+2 never equals 5, that numbers have additive and multiplicative inverses and on and on and on.
How would you do this for something like the Poincare conjecture or the uncountability of the reals?

Also, how do you show that your implementation does in fact compute addition without using math?

Frankly, the argument you’re trying to make is like arguing that we no longer need farms since we can get our food from supermarkets.

Edit: Also, the most you can show STATISTICALLY is that the commutative law holds for most (or nearly all) examples of the size you try, whereas mathematical proofs can show that it always holds.
We would have “true-and-I-can-prove-it” and “true-and-I-can’t-prove-it.”
The definition of proof is the issue. An instrumentalist requires falsifiable predictions; a realist settles for acceptable logic when no predictions are available.
The definition of proof is the issue. An instrumentalist requires falsifiable predictions; a realist settles for acceptable logic when no predictions are available.
A rationalist (in the original sense of the word) would go even further, requiring a logical proof and not accepting a mere prediction as a substitute.
How did instrumentalism and realism get identified as conflicting positions? There are forms of physical realism that conflict with instrumentalism—but instrumentalism is not inherently opposed to physical realism.
Not inherently, no. But the distinction is whether the notion of territory is a map (instrumentalism) or the territory (realism). It does not matter most of the time, but sometimes, like when discussing morality or quantum mechanics, it does.
I don’t understand. Can you give an example?
A realist finds it perfectly OK to argue which of the many identical maps is “truer” to the invisible underlying territory. An instrumentalist simply notes that there is no way to resolve this question to everyone’s satisfaction.
I’m objecting to your exclusion of instrumentalism from the realist label. An anti-realist says there is no territory. That’s not necessarily the position of the instrumentalist.
An anti-realist says there is no territory. That’s not necessarily the position of the instrumentalist.
Right. Anti-realism makes an untestable and unprovable statement like this (so does anti-theism, by the way). An instrumentalist says that there is no way to tell whether there is a territory, and that the map/territory distinction is an often useful model, so why not use it when it makes sense.
I’m objecting to your exclusion of instrumentalism from the realist label.
Well, this is an argument about labels, definitions and identities, which is rarely productive. You can either postulate that there is this territory/reality thing independent of what anyone thinks about it, or you can call it a model which works better in some cases and worse in others. I don’t really care what label you assign to each position.
Respectfully, you were the one invoking technical jargon to do some analytical work.
Without jargon: I think there is physical reality external to human minds. I think that the best science can do is make better predictions—accurately describing reality is harder.
You suggest there is unresolvable tension between those positions.
I think there is physical reality external to human minds.
It’s a useful model, yes.
I think that the best science can do is make better predictions—accurately describing reality is harder.
The assumption that “accurately describing reality” is even possible is a bad model, because you can never tell if you are done. And if it is not possible, then there is no point postulating this reality thing. Might as well avoid it and stick with something that is indisputable: it is possible to build successively better models.
You suggest there is unresolvable tension between those positions.
Yes, one of them postulates something that cannot be tested. If you are into Occam’s razor, that’s something that fails it.
We can’t talk about testing propositions against reality until we decide whether there is a reality to test it against. If you are uncertain about that point, the nuances between predicting reality and modelling reality are not on point—and probably confuse the analysis more than they shed any light.
If someone walked into one of your high-end physics lectures and wanted to talk about whether there was reality (see Cartesian doubt), I think you would tell him that the physics class was not the venue for that type of conversation. If you tried to answer his questions while also answering other students’ questions, everything would get hopelessly confused.
I never did. I talk about testing propositions against experiment, without postulating a mysterious untestable reality behind those experiments.

Unlike the model you call reality, the existence of repeatable experiments is a repeatable experimental fact.

What is an experiment but testing a proposition against reality?

That’s the realist’s approach. To me, you test a proposition with an experiment, not against anything.
If the experiment is not a way to tap into reality (in some extremely metaphorical sense), why should I care about the experimental results when trying to decide whether my proposition is true?
If you want to know where a rock you throw will land (a prediction based on a model constructed from previously performed experiments), you want your model to have the necessary predictive power. Whether it corresponds to some metaphysical concept of reality is quite secondary.
That doesn’t answer my question. To rephrase using your new example, if the prior experiments do not metaphorically “tap into reality,” why should I have any confidence that a model based on those experimental results will be useful in predicting future events?
Well, either the experimental result has predictive power, or it doesn’t. If certain kinds of experimental results prove useful for predicting the future, then I should have confidence in predictions based on (models based on) those results. Whether I call them “reality” or “a model” doesn’t really matter very much.
More generally, to my way of thinking, this whole “instrumentalists don’t believe in reality” business mostly seems like a distinction in how we use words rather than in what experiences we anticipate.
It would potentially make a difference, I suppose, if soi-disant instrumentalists didn’t actually expect the results of different experiments to be reconcilable with one another (under the principle that each experiment was operating on its own model, after all, and there’s no reason to expect those models to have any particular relationship to one another). But for the most part, that doesn’t seem to be the case.
There’s a bit of that when it comes to quirky quantum results, I gather, but to my mind that’s kind of an “instrumentalism of the gaps”… when past researchers have come up with a unified model we accept that unified model, but when current data doesn’t seem unified given our current understanding, rather than seeking a unified model we shrug our shoulders and accept the inconsistency, because hey, they’re just models, it’s not like there’s any real underlying territory.
Which in practice just means we wait for someone else to do the hard work of reconciling it all.
why should I have any confidence that a model based on those experimental results will be useful in predicting future events?
Because it has been experimentally confirmed before, and from experience we can assign a high probability that a model that has been working well in the past will continue to work in similar circumstances in the future.
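One way to cash out that probability assignment (a sketch under an assumed uniform prior on the model’s reliability, i.e. Laplace’s rule of succession):

```python
# Sketch assuming a uniform prior over the model's success rate
# (Laplace's rule of succession): after s successes in n trials,
# P(works next time) = (s + 1) / (n + 2).
def p_next(s: int, n: int) -> float:
    return (s + 1) / (n + 2)

print(p_next(10, 10))    # 11/12, about 0.92, after ten straight confirmations
print(p_next(100, 100))  # 101/102, about 0.99: past success earns future confidence
```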