So reading about something in a book is a sensory experience now? I beg to differ. A sensory experience of the Crusades would be witnessing them firsthand. The sensory experience of reading about them is perceiving patterns of ink on a piece of paper.
Edit: Also, I think that RobinZ didn’t state that as something that she believed; she stated it as something that she believed the OP meant. It’s that subjective interpretation of his position that I didn’t want to debate. If you wish to adopt that position as your own and debate its substance, we certainly can.
What’s important isn’t the number of degrees of removal, but that the belief’s being true corresponds to different expected sensory experiences of any kind at all than its being false. The sensory experience of perceiving patterns of ink on a piece of paper counts.
Now you could say: “reading about the Crusades in history books is strong evidence that ‘the Crusades happened’ is the current academic consensus,” and you could hypothesize that the academic consensus was wrong. This further hypothesis would lead to further expected sensory data—for instance, examining the documents cited by historians and finding that they must have been forgeries, or whatever.
If you adopt that position, then belief in ghosts, for instance, will result in the sensory experience of reading or hearing about them, no? Can you then point to ANY belief that doesn’t result in a sensory experience, other than something you make up yourself out of thin air?
If the concept of sensory experience is to have any meaning at all, you can’t just stretch it as you see fit. If you can’t see, hear, smell, taste, or touch an object directly, you have not had sensory experience of that object. That does not mean the object does not exist, though.
Yes, ghost stories are evidence for the existence of ghosts. Just not very strong evidence.
There can be indirect sensory evidence as well as direct.
You are disputing definitions. Reading something in a book is the sort of thing you’d change your expectation about depending on your model of the world, as with any other observation. If your beliefs influence your expectations about observations, they are part of your model of reality. On the other hand, if they don’t, they can sometimes still be part of your model of reality, but that’s a more subtle point.
And returning to your earlier concerns, consider me as having special insight into the intended meaning, and as providing a counterexample to the supposed impossibility of continuing the discussion. Reading something in a history book definitely counts as anticipated experience.
Very interesting read on disputing definitions. While the solution proposed there is very clever and elegant, this particular discussion is complicated by the fact that we’re discussing the statements of a person who is not currently participating. Coming up with alternate words to describe our ideas of what “sensory experience” means does nothing to help us understand what he meant by it. Incidentally this is why I didn’t want to get drawn into this debate to begin with.
Also—“consider me having a special insight into the intended meaning”—on what grounds shall I consider your having such special insight?
I’ve closely followed Yudkowsky’s work for a while, and have a pretty good model of what he believes on topics he publicly discusses.
Fair enough. So if, on your authority, the OP believes that reading about something is anticipated experience, does that not then cover every rumor, fairy tale, and flat-out nonsense that has ever been written? What, then, would be an example of a belief that CANNOT be connected to an “anticipated experience”?
See this comment on the first part of your question and this page on the second (but, again, there are valid beliefs that don’t translate into anticipated experience).
I agree wholeheartedly that there are valid beliefs that don’t translate into anticipated experience. As a matter of fact what’s written there was pretty much the exact point that I was trying to make with my very first response in this topic.
Does that not, however, contradict the OP’s assertion that “Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.”? That’s what I took issue with to begin with.
It does contradict that assertion, but not at first approximation, and not in the sense in which you took issue with it. You have to be very careful if a belief doesn’t translate into anticipated experience. Beliefs about historical facts that don’t translate into anticipated experience (or don’t follow from past experience, that is, observations) are usually invalid.
You seem to place a good deal of value on the concept of anticipated experience, but you give it a definition so broad that the overwhelming majority of beliefs will meet the criterion. If belief in ghosts, for instance, can lead to the anticipated experience of reading about them in a book, what validity does the notion have as a means of evaluating beliefs?
When a belief (hypothesis) is about reality, it responds to new evidence, or to arguments about previously known evidence. It’s reasonable to expect that, as a result, some beliefs will turn out incorrect, and some correct. Either way it’s not a problem: you learn things about the world whatever the conclusion. You learn that there are no ghosts, but that there are rainbows.
The problem is beliefs that purport to speak about reality but really don’t, so that you become deceived by them. Not being connected to reality through anticipated experience, they draw your attention where it is of no use, influence your decisions for no good reason, and protect themselves by ignoring any knowledge about the world you obtain.
It is a great heuristic to treat any beliefs that don’t translate into anticipated experience with utmost suspicion, or even to run away from them in horror.
How would you learn that there are no ghosts? You form the belief “there are ghosts”, which leads to the anticipated experience (by your definition of such) that “I will read about ghosts in a book”; you go and read about ghosts in a book. Criterion met, belief validated. The same goes for UFOs, psychics, astrology, etc. What value does the concept of anticipated experience have if it fails to filter out even the most common fallacious beliefs?
That there are books about ghosts is evidence for ghosts existing (but also for lots of other things). There are also arguments against this hypothesis, both a priori and observational. A good model/theory also explains why you’d read about ghosts even though there is no such thing.
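To put rough numbers on why that evidence counts for so little, here is a minimal Bayesian update sketch; every probability below is invented purely for illustration. The point is that ghost books are nearly as probable in a world without ghosts (folklore alone produces them), so the likelihood ratio is close to 1 and the posterior barely moves.

```python
# Hypothetical numbers, chosen only to illustrate the direction of the update.
prior_ghosts = 0.01              # P(ghosts exist) before considering the books
p_books_if_ghosts = 0.99         # books about ghosts are near-certain either way:
p_books_if_no_ghosts = 0.98      # folklore alone is enough to produce them

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_books = (p_books_if_ghosts * prior_ghosts
           + p_books_if_no_ghosts * (1 - prior_ghosts))
posterior_ghosts = p_books_if_ghosts * prior_ghosts / p_books

print(f"prior:     {prior_ghosts:.5f}")      # 0.01000
print(f"posterior: {posterior_ghosts:.5f}")  # 0.01010 -- the evidence barely moves it
```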
You’re not addressing my core point, though. If the criterion of anticipated experience, as you define it, is as likely to be satisfied by fallacious beliefs as by valid ones, what purpose does it serve?
I addressed that question in this comment; if something is unclear, ask away. The difference is between a belief that is incorrect, and a belief that is not even wrong.
Alright, I think I see what you’re getting at, but I still can’t help but think that your definition of sensory experience is too broad to be really useful. I mean, the only type of belief it seems to filter out is absolute nonsense like “I have a third leg that I can never see or feel”; did I get that about right?
Yes. It happens all the time. It’s one way nonsense protects itself, persisting for a long time in the minds of individual people and of cultures.
(More generally, see anti-epistemology.)
So essentially what you and Eliezer are referring to as “anticipated experience” is just basic falsifiability then?
With a Bayesian twist: things don’t actually get falsified, don’t become wrong with absolute certainty; rather, observations adjust your level of belief.
Ok, I understand what you mean now. Now that you’ve clarified what Eliezer meant by anticipated experience, my original objection to it is no longer applicable. Thank you for an interesting and thought-provoking discussion.
Slightly OT, but this relates to something that really bugs me. People often bring up the importance of statistical analysis and the possibility of flukes/lab error in order to prove that “Popper was totally wrong, we get to completely ignore him and this outdated, long-refuted notion of falsifiability.”
But the way I see it, this doesn’t refute Popper, or the notion of falsifiability: it just means we’ve generalized the notion to probabilistic cases, instead of just the binary categorization of “unfalsified” vs. “falsified”. This seems like an extension of Popper/falsifiability rather than a refutation of it. Go fig.
I reached a much clearer understanding once I peeled away the structure of probability measure and got down to mathematically crisp events on sample spaces (classes of possible worlds). From this perspective, there are falsifiable concepts, but they usually don’t constitute useful statements, so we work with the ones that can’t be completely falsified, even though parts of them (some of the possible worlds included in them) do get falsified all the time, when you observe something.
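A toy version of that possible-worlds picture, with a made-up two-feature sample space just to show the mechanics: an observation never refutes the hypothesis as a whole, but it does falsify the worlds inside it that contradict the observation.

```python
# A toy sample space: worlds are (coin, weather) pairs.
worlds = {(coin, weather) for coin in ("H", "T") for weather in ("rain", "sun")}

# A hypothesis is a set of possible worlds.
hypothesis = {w for w in worlds if w[0] == "H"}      # "the coin landed heads"

# Observing rain falsifies every sunny world -- including some of the
# worlds inside the hypothesis -- without falsifying the hypothesis itself.
observation = {w for w in worlds if w[1] == "rain"}
hypothesis &= observation
print(hypothesis)  # {('H', 'rain')} -- the hypothesis survives, but shrunken
```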
Isn’t that like saying we’ve generalized the theory that “all is fire” to cases where the universe is only part fire? If falsification is absolute then Popper’s insight that “all is falsification” is just plain wrong; if falsification is probabilistic then surely the relevant ideas existed before Popper as probability theory. It’s not like Popper invented the notion that if a hypothesis is falsified we shouldn’t believe it.
Falsifiability can be quantified, in bits. If the only test you have for whether something’s true or not is something lame like whether it appears in stories or not, then you have a tiny amount of falsifiability. If there is a large supply of experiments you can do, each of which provides good evidence, then it has lots of falsifiability.
(This really deserves to be formalized, in terms of something along the lines of expected bits of net evidence, but I’m not sure how to do so, exactly. Expected bits of evidence does not work, because of scenarios where there is a small chance of lots of evidence being available, but a large chance of no evidence being available.)
Just a note about terminology: “expected bits of evidence” also goes by the name of entropy, and is a good thing to maximize in designing an experiment. (My previous comment on the issue.)
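As a rough illustration of both points (falsifiability quantified in bits, and expected information gain as the thing an experiment designer maximizes), the following computes the mutual information between a binary test outcome and a hypothesis. The function and all the numbers are invented for the example, not taken from any proposal above.

```python
import math

def expected_bits(prior, p_obs_if_h, p_obs_if_not_h):
    """Mutual information (in bits) between a binary observation and H:
    the KL divergence from the prior over {H, not-H} to the posterior,
    averaged over how probable each outcome is."""
    total = 0.0
    for outcome_seen in (True, False):
        p_e_h = p_obs_if_h if outcome_seen else 1 - p_obs_if_h
        p_e_nh = p_obs_if_not_h if outcome_seen else 1 - p_obs_if_not_h
        p_e = prior * p_e_h + (1 - prior) * p_e_nh
        if p_e == 0:
            continue
        posterior = prior * p_e_h / p_e
        for p_post, p_pri in ((posterior, prior), (1 - posterior, 1 - prior)):
            if p_post > 0:
                total += p_e * p_post * math.log2(p_post / p_pri)
    return total

# A decisive experiment vs. the lame "does it appear in stories?" test:
print(expected_bits(0.5, 0.999, 0.001))  # ~0.99 bits
print(expected_bits(0.5, 0.55, 0.45))    # ~0.007 bits
```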
And if I understand you correctly, you’re saying that the problem with entropy as a measure of falsifiability is that someone can come up with a crank theory that gives the same predictions in every single case except one that is near-impossible to observe, but which, if it happened, would completely vindicate them?
If so, the problem with such theories is that they have to provide a lot of bits to specify that improbable event, which would be penalized under the MML formalism because it lengthens the hypothesis significantly. That may be what you want to work into a measure of falsifiability.
But then, at that point, I’m not sure if you’re measuring falsifiability per se, or just general “epistemic goodness”. It’s okay to have those characteristics you want as a separate desideratum from falsifiability.
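For scale, that MML penalty is easy to estimate: under a Shannon code, singling out an event of probability p costs about -log2(p) bits of hypothesis length. A made-up one-in-a-billion example:

```python
import math

def description_length_bits(p_event):
    """Shannon code length: bits needed to single out an event of probability p."""
    return -math.log2(p_event)

# A crank theory that differs from the standard one only on a
# one-in-a-billion observation pays for it in hypothesis length:
print(description_length_bits(1e-9))  # ~29.9 extra bits
```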
Isn’t it an essential criterion of falsifiability to be able to design an experiment that can DEFINITIVELY prove the theory false?
That is the criterion which the Bayesian idea of evidence lets you relax. Instead of saying “you need to be able to define experiments where at least one result would be completely impossible according to the theory”, a Bayesian will tell you “you need to be able to define experiments where the probability of one result under the theory is significantly different from the probability of another result”.
Look at, say, the theory that a coin is weighted towards heads. If you want to be pedantic, no result can “definitely prove” that it is not (unusual events can happen), but an even split of heads and tails (or a weighting towards tails) is much more unusual given that theory than a weighting towards heads.
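A quick sketch of that coin example, with an arbitrary 0.7 standing in for “weighted towards heads”: neither hypothesis makes an even split impossible, but their likelihoods for it differ by a factor of thousands, which is all the Bayesian criterion asks for.

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of k heads in n flips of a coin with bias p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 100, 50                 # an even split of heads and tails
p_fair, p_heads = 0.5, 0.7     # 0.7 is an arbitrary stand-in for "weighted"

ratio = binomial_pmf(k, n, p_fair) / binomial_pmf(k, n, p_heads)
print(ratio)  # ~6e3: nothing is refuted outright,
              # but the even split strongly favors the fair coin
```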
Edit PS: I am totally stealing the meme that “Bayes is a generalization of Popper” from SilasBarta.
I’m pretty sure that was handily discussed in An Intuitive Explanation of Bayes’s Theorem and A Technical Explanation of Technical Explanation.
Fair point, and it was EY’s essay that showed me the connection. But keep in mind, the point of the essay is, “Bayesian inference is right, look how Popper is a crippled version of it.”
My point in saying “my” meme is different: “Popper and falsificationism are on the right track—don’t shy away from the concepts entirely just because they’re not sufficiently general.” It’s a warning against taking the failures of Popper to mean that any version of falsificationism is severely flawed.
Ehhcks-cellent!
Steal the meme, and spread it as far and as wide as you possibly can! The sooner it beats out “Popper is so 70 years ago”, the better. (Kind of ironic that Bayes long predated Popper, though the formalization of [what we now call] Bayesian inference did not.)
Example of my academically-respected arch-nemesis arguing the exact anti-falsificationist view I was criticizing.
As Robin has explained below, Bayesianism doesn’t do that. You should also see the works of Lakatos and Quine, where they discuss the idea that falsification is flawed because all claims have auxiliary hypotheses, and one can’t falsify any hypothesis in isolation even if one is trying to construct a neo-Popperian framework.
Yes, but that still doesn’t show falsificationism to be wrong, as opposed to “narrow” or “insufficiently generalized”. Lakatos and Quine have also failed to show how it’s a problem that you can’t rigidly falsify a hypothesis in isolation: just as you can generalize Popper’s binary “falsified vs. unfalsified” to probabilistic cases, you can construct a Bayes net that shows how your various beliefs (including the auxiliary hypotheses) imply particular observations.
The relative likelihoods they place on the observations let you know how much each of those beliefs is attenuated or amplified by any particular observation. This method gives you the functional equivalent of testing hypotheses in isolation, since some of them will be attenuated far more than others.
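Here is a toy version of that joint update, with one main hypothesis, one auxiliary hypothesis (say, “the instrument is calibrated”), and likelihoods invented entirely for the sketch: a single negative observation attenuates both beliefs, but the main hypothesis far more, which is the “testing in isolation” effect in miniature.

```python
from itertools import product

# Illustrative priors for a main hypothesis H and an auxiliary
# hypothesis A ("the instrument is calibrated"), assumed independent.
p_h, p_a = 0.5, 0.9

def p_negative_result(h, a):
    """P(negative result | H, A) -- likelihoods invented for the sketch."""
    if h and a:
        return 0.05   # true hypothesis, working instrument: negative is rare
    if h and not a:
        return 0.50   # broken instrument: the result is uninformative
    return 0.90       # false hypothesis: a negative result is expected

# Joint update on observing one negative result.
joint = {}
for h, a in product((True, False), repeat=2):
    prior = (p_h if h else 1 - p_h) * (p_a if a else 1 - p_a)
    joint[(h, a)] = prior * p_negative_result(h, a)

z = sum(joint.values())
print(sum(v for (h, a), v in joint.items() if h) / z)  # P(H|obs) ~ 0.10, down from 0.5
print(sum(v for (h, a), v in joint.items() if a) / z)  # P(A|obs) ~ 0.86, barely moved
```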
Right, I was speaking in a non-Bayesian context.
If I remember rightly, that’s where poor old Popper came unstuck: having thought of the falsifiability criterion, he couldn’t work out how to rigorously make it flexible. And as no experiment’s exactly 100% uppercase-D Definitive, that led to some philosophers piling on the idea of falsifiability, as JoshuaZ said.
But more recent work in philosophy of science suggests a more sophisticated way to talk about how falsifiability can work in the real world.
The key idea is “severe testing”, where a “severe test” is a test likely to expose a specific error in a model, if such an error is present. Those models that pass more, and more severe, tests can be regarded as more useful than those that don’t. This approach also disarms the “auxiliary hypotheses” objection JoshuaZ paraphrased; one can just submit those hypotheses to severe testing too. (I wouldn’t be surprised to find out that’s roughly equivalent to the Bayes net approach SilasBarta mentioned.)
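A bare-bones way to see why severity and the likelihood picture line up (my framing, not anything from the work cited above): a passed test is severe exactly to the extent that passing was improbable if the error were present, i.e. to the extent the error hypothesis assigns the pass a low likelihood.

```python
def severity_of_pass(p_pass_if_error_present):
    """Severity of a passed test: the chance it would have FAILED had the
    specific error actually been present. A rough sketch of the idea only."""
    return 1.0 - p_pass_if_error_present

# "Appears in stories" is a test almost any claim passes, error or not:
print(severity_of_pass(0.95))  # ~0.05 -- barely a test at all
# A stringent experiment that a flawed model would almost surely flunk:
print(severity_of_pass(0.02))  # ~0.98 -- a severe test
```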
At the bottom of the sidebar, you will find a list of top contributors; Vladimir Nesov is on the list.