By the way, I wonder if someone can clear something up for me about “making beliefs pay rent.” Eliezer draws a sharp distinction between falsifiable and non-falsifiable beliefs (though he frames these concepts differently), with the latter forming stand-alone webs of beliefs that only support themselves.
But the correlation between predicted experience and actual experience is never perfect: there’s always uncertainty. In some cases, there’s rather a lot of uncertainty. Conversely, it’s extremely difficult to make a statement in English that does not contain ANY information regarding predicted or retrodicted experience. In that light, it doesn’t seem useful to draw such a sharp division between two idealized kinds of beliefs. Would Eliezer assign value to a belief based on its probability of predicting experience?
How would you quantify that? Could we define some kind of correlation function between the map and the territory?
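One possible way to make that concrete (a minimal sketch of my own, not something from the post; the logarithmic scoring rule and the toy numbers below are assumptions) is to score each belief by the average log probability it assigned to what was actually observed. A belief that couples tightly to experience earns a better score than one that barely constrains it:

```python
import math

def average_log_score(predictions, outcomes):
    """Mean log probability a belief assigned to what actually happened.
    Closer to zero means the map tracked the territory more tightly."""
    return sum(math.log(p if happened else 1 - p)
               for p, happened in zip(predictions, outcomes)) / len(predictions)

# Two "maps" predicting the same five yes/no observations:
outcomes     = [True, True, False, True, False]
confident    = [0.9, 0.8, 0.2, 0.9, 0.1]   # couples tightly to experience
noncommittal = [0.5, 0.5, 0.5, 0.5, 0.5]   # barely constrains experience

print(average_log_score(confident, outcomes))     # ≈ -0.15
print(average_log_score(noncommittal, outcomes))  # ≈ -0.69
```

Under a score like this, “paying rent” becomes a matter of degree rather than a sharp division.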
I always understood the distinction to be about when it was justifiable to label a theory as “scientific.” Thus, a theory that in principle can never be proven false (Popper was thinking of Freudian psychology) should not be labeled as a “scientific theory.”
The further assertion is that if one is not being scientific, one is not trying to say true things.
In the post I’m referring to, EY evaluates a belief in the laws of kinematics based on predicting how long a bowling ball will take to hit the ground when tossed off a building, and then presumably testing it. In this case, our belief clearly “pays rent” in anticipated experience. But what if we know that we can’t measure the fall time accurately? What if we can only measure it to within an uncertainty of 80% or so? Then our belief isn’t strictly falsifiable, but we can gather some evidence for or against it. In that case, would we say it pays some rent?
My argument is that nearly every belief pays some rent, and no belief pays all the rent. Almost everything couples in some weak way to anticipated experience, and nothing couples perfectly.
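To illustrate how a belief can pay some rent even under sloppy measurement, here is a minimal sketch (the 50 m building, the Gaussian noise model, and the rival 6-second prediction are all assumptions for illustration): a single noisy stopwatch reading falsifies nothing, but it still shifts the odds between the kinematic prediction and a rival guess.

```python
import math
import random

g, h = 9.8, 50.0                       # assumed: a 50 m building
t_kinematics = math.sqrt(2 * h / g)    # ≈ 3.2 s, the kinematic prediction
t_rival = 6.0                          # assumed rival guess, for comparison

def gauss_pdf(x, mu, sigma):
    """Density of a normal distribution; a crude model of the stopwatch."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

random.seed(0)
sigma = 0.8 * t_kinematics             # a stand-in for "80% measurement uncertainty"
measured = random.gauss(t_kinematics, sigma)   # pretend the ball really obeys kinematics

# One noisy reading falsifies neither prediction, but the likelihood ratio
# still nudges the odds toward whichever prediction fits the reading better.
ratio = gauss_pdf(measured, t_kinematics, sigma) / gauss_pdf(measured, t_rival, sigma)
print(round(measured, 2), "s measured; likelihood ratio", round(ratio, 2))
```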
I think you are conflating the issue of falsifiability with the issue of instrument accuracy. Falsifiability is just one of several conditions for labeling a theory as scientific. Specifically, the requirement is that a theory must detail in advance what phenomena won’t happen. The theory of gravity says that we won’t see a ball “fall” up or spontaneously enter orbit. When more specific predictions are made, instrument errors (and other issues like air friction) become an issue, but that is not the core concern of falsifiability.
For example, Karl Popper was concerned about the mutability of Freudian psychoanalysis, which seemed capable of explaining both an occurrence and its negation without difficulty. By contrast, the theory of gravity, standing alone, admits that it cannot explain an object accelerating toward Earth at anything other than 9.8 m/s^2. Science as a whole has explanations for such cases (air friction, for instance), but gravity alone doesn’t.
Committing to falsifiability helps prevent failure modes like belief in belief.
There are a couple things I still don’t understand about this.
Suppose I have a bent coin, and I believe that P(heads) = 0.6. Does that belief pay rent? Is it a “floating belief”? It is not, in principle, falsifiable. It’s not a question of measurement accuracy in this case (unless you’re a frequentist, I guess). But I can gather some evidence for or against it, so it’s not uninformative either. It seems useful to have something between grounded and floating beliefs to describe it.
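For concreteness, here is a minimal sketch of how that evidence might accumulate (the flat prior, the coarse grid, and the 200 simulated flips are assumptions): the belief that the bias is near 0.6 is never strictly falsified, but the probability mass near 0.6 grows or shrinks as flips come in.

```python
import random

random.seed(1)
true_p = 0.6                      # assumed: the coin really is bent this much
flips = [random.random() < true_p for _ in range(200)]
heads = sum(flips)

# Posterior over the bias theta on a coarse grid, starting from a flat prior.
grid = [i / 100 for i in range(1, 100)]
likelihood = [t ** heads * (1 - t) ** (len(flips) - heads) for t in grid]
total = sum(likelihood)
posterior = [l / total for l in likelihood]

# "P(heads) = 0.6" is never strictly falsified, but the probability mass
# near 0.6 rises or falls as the flips accumulate.
near_06 = sum(p for t, p in zip(grid, posterior) if abs(t - 0.6) <= 0.05)
print(heads, "heads in", len(flips), "flips ->",
      "P(bias within 0.05 of 0.6) ≈", round(near_06, 3))
```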
Second, when LWers talk about beliefs, or “the map,” are they referring to a model of what we expect to observe, or to how things actually happen? This would dictate how we deal with measurement uncertainties. In the first case, they must be included in the map, trivially. In the second case, the map still carries an uncertainty that results from propagating measurement uncertainty back through the updating process. But then it might make sense to talk only about grounded or floating beliefs, and to attribute the fuzzy stuff in between to our inability to observe without uncertainty.
Your distinction makes sense—I’m just not sure how to apply it.
Strictly speaking, no proposition is ever proven false (i.e., assigned probability zero). A proposition simply becomes much less likely than competing, inconsistent explanations. Speaking that strictly, falsifiability requires the ability to say in advance what observations would be inconsistent (or less consistent) with the theory.
Your belief that the coin is bent does pay rent: you would be more surprised by 100 straight tails than you would be if you thought the coin was fair. Of course, neither P(heads) = 0.6 nor P(heads) = 0.5 is particularly consistent with that observation.
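To put rough numbers on that surprise (a sketch of my own, not part of the comment): 100 straight tails carries about 132 bits of surprisal under P(heads) = 0.6 versus about 100 bits under a fair coin, so the bent-coin belief is hit harder, and both are hit very hard.

```python
import math

def surprisal_bits(p_heads, n_tails=100):
    """-log2 probability of n_tails tails in a row, given a heads bias."""
    return -n_tails * math.log2(1 - p_heads)

print(surprisal_bits(0.6))  # ≈ 132 bits: the bent-coin belief is hit harder
print(surprisal_bits(0.5))  # ≈ 100 bits: the fair-coin belief is also badly surprised
```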
Map & Territory is a slightly different issue. Consider the toy example of the colored balls in the opaque bag. Map & Territory is a metaphor to remind you that your belief about the proportion of red and blue balls is distinct from the actual proportion. Changes in your beliefs cannot change the actual proportions.
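A tiny simulation of the bag example (the 30% red fraction, the 50 draws, and the Laplace-smoothed estimate are assumptions for illustration): the map keeps moving as draws come in, while the territory, the actual proportion, never changes.

```python
import random

random.seed(2)
actual_red_fraction = 0.3          # the territory: fixed, whatever anyone believes
draws = [random.random() < actual_red_fraction for _ in range(50)]

reds = 0
belief = 0.5                       # the map: start agnostic
for n, is_red in enumerate(draws, start=1):
    reds += is_red
    belief = (reds + 1) / (n + 2)  # Laplace-smoothed estimate of the red fraction
    # `belief` keeps changing; `actual_red_fraction` never does.

print("final belief:", round(belief, 2), "| actual proportion:", actual_red_fraction)
```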
Your distinction makes sense—I’m just not sure how to apply it.
When examining a belief, ask “What observations would make this belief less likely?” If your answer is “No such observations exist” then you should have grave concerns about the belief.
Note the distinction between:
Observations that would make the proposition less likely
Observations I expect
I don’t expect to see a duck have sex with an otter and give birth to a platypus, but if I did, I’d start having serious reservations about the theory of evolution.
That’s very helpful, thanks. I’m trying to shove everything I read here into my current understanding of probability and estimation. Maybe I should just read more first.
But what if we know that we can’t measure the fall time accurately? What if we can only measure it to within an uncertainty of 80% or so? Then our belief isn’t strictly falsifiable, but we can gather some evidence for or against it. In that case, would we say it pays some rent?
Yes. As a more general clarification, making beliefs pay rent is supposed to highlight the same sorts of failure modes as falsifiability while allowing useful but technically unfalsifiable beliefs (e.g., your example, or some classes of probabilistic theories).