I think you are conflating the issue of falsifiability with the issue of instrument accuracy. Falsifiability is just one of several conditions for labeling a theory as scientific. Specifically, the requirement is that a theory must specify in advance what phenomena won’t happen. The theory of gravity says that we won’t see a ball “fall” up or spontaneously enter orbit. When more specific predictions are made, instrument errors (and other issues like air friction) become relevant, but they are not the core concern of falsifiability.
For example, Karl Popper was concerned about the mutability of Freudian psychoanalysis, which seemed capable of explaining both an occurrence and its negation without difficulty. By contrast, the theory of gravity, standing alone, admits that it cannot explain an object accelerating toward Earth at anything other than 9.8 m/s^2. Science as a whole may have an explanation for such a case, but the theory of gravity itself does not.
Committing to falsifiability helps prevent failure modes like belief in belief.
There are a couple of things I still don’t understand about this.
Suppose I have a bent coin, and I believe that P(heads) = 0.6. Does that belief pay rent? Is it a “floating belief”? It is not, in principle, falsifiable. It’s not a question of measurement accuracy in this case (unless you’re a frequentist, I guess). But I can gather evidence for or against it, so it’s not uninformative either. It would be useful to have something between grounded and floating beliefs to describe a belief like this.
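For concreteness, here is a minimal sketch (Python, all numbers invented) of what gathering evidence for or against such a belief can look like: comparing how well P(heads) = 0.6 and P(heads) = 0.5 predict a simulated run of flips. The likelihood ratio moves with the evidence but never reaches zero, which is the sense in which the belief is informative yet not strictly falsifiable.

```python
import random

def likelihood(p_heads, flips):
    """Probability of the observed flip sequence under a given bias."""
    prob = 1.0
    for flip in flips:
        prob *= p_heads if flip == "H" else (1.0 - p_heads)
    return prob

random.seed(0)
true_bias = 0.6  # known only to the simulation, not to the believer
flips = ["H" if random.random() < true_bias else "T" for _ in range(100)]

# Evidence shifts this ratio up or down, but never to exactly zero.
ratio = likelihood(0.6, flips) / likelihood(0.5, flips)
print(f"likelihood ratio, P=0.6 vs P=0.5, after 100 flips: {ratio:.2f}")
```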
Second, when LWers talk about beliefs, or “the map,” are they referring to a model of what we expect to observe, or to a model of how things actually happen? The answer dictates how we deal with measurement uncertainties. In the first case, measurement uncertainties must be included in the map, trivially. In the second case, the map still carries an uncertainty that results from back-propagating measurement uncertainty through the updating process. But then it might make sense to talk only about grounded or floating beliefs, and to attribute the fuzzy stuff in between to our inability to observe without uncertainty.
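A toy illustration of the second reading, assuming Gaussian measurement noise with a known variance (the numbers are made up): the belief about the true quantity keeps a posterior width inherited from the instrument, which is the back-propagated measurement uncertainty described above.

```python
def gaussian_update(prior_mean, prior_var, measurement, noise_var):
    """Conjugate update of a Gaussian belief given one noisy measurement."""
    gain = prior_var / (prior_var + noise_var)  # how much to trust the data
    post_mean = prior_mean + gain * (measurement - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

mean, var = 0.0, 100.0  # vague prior over the true quantity
for obs in [9.9, 10.3, 9.7, 10.1]:  # noisy instrument readings
    mean, var = gaussian_update(mean, var, obs, noise_var=0.25)

print(f"posterior: {mean:.2f} +/- {var ** 0.5:.2f}")
# A noisier instrument (larger noise_var) leaves a wider posterior; the
# measurement uncertainty never vanishes from the map, it only shrinks.
```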
Your distinction makes sense—I’m just not sure how to apply it.
Strictly speaking, no proposition is ever proven false (i.e., assigned probability zero). A proposition simply becomes much less likely than competing, inconsistent explanations. Speaking that strictly, falsifiability requires the ability to say in advance which observations would be inconsistent (or less consistent) with the theory.
Your belief that the coin is bent does pay rent: you would be more surprised by 100 straight tails than you would be if you thought the coin was fair. But neither P = 0.6 nor P = 0.5 is particularly consistent with that observation.
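To put rough numbers on “more surprised” (a sketch; the third hypothesis is added only for scale): the surprisal, in bits, of 100 straight tails under each belief about P(heads).

```python
import math

for p_heads in [0.6, 0.5, 0.01]:
    surprisal = -100 * math.log2(1.0 - p_heads)  # bits for 100 tails
    print(f"P(heads) = {p_heads}: {surprisal:.1f} bits of surprise")
# P = 0.6 gives ~132 bits and P = 0.5 gives 100 bits, so the bent-coin
# belief is the more surprised of the two, and both are dwarfed by a
# hypothesis that the coin almost always lands tails.
```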
Map & Territory is a slightly different issue. Consider the toy example of the colored balls in the opaque bag. Map & Territory is a metaphor to remind you that your belief in the proportion of red and blue balls is distinct from the actual proportion. Changes in your beliefs cannot change the actual proportions.
Your distinction makes sense—I’m just not sure how to apply it.
When examining a belief, ask “What observations would make this belief less likely?” If your answer is “no such observations exist,” then you should have grave concerns about the belief.
Note the distinction between:
- Observations that would make the proposition less likely
- Observations I expect
I don’t expect to see a duck have sex with an otter and give birth to a platypus, but if I did, I’d start having serious reservations about the theory of evolution.
That’s very helpful, thanks. I’m trying to shove everything I read here into my current understanding of probability and estimation. Maybe I should just read more first.
I found this extremely helpful as well, thank you.