So essentially what you and Eliezer are referring to as “anticipated experience” is just basic falsifiability then?
With a Bayesian twist: things don’t actually get falsified, don’t become wrong with absolute certainty; rather, observations adjust your level of belief.
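To make that contrast concrete, here is a minimal sketch of the kind of update I mean (the `update` helper and the numbers are made up purely for illustration): an observation the hypothesis says is very unlikely lowers your belief sharply, but doesn’t push it to exactly zero the way strict falsification would.

```python
# Minimal Bayesian update: an observation shifts the probability of a
# hypothesis rather than refuting it outright.

def update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Return P(H | observation) via Bayes' rule."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# An observation the hypothesis says is very unlikely (but not impossible,
# e.g. because of lab error) lowers belief sharply without zeroing it out.
belief = 0.9
belief = update(belief, likelihood_if_true=0.01, likelihood_if_false=0.5)
print(belief)  # ~0.15: strongly disfavoured, not absolutely falsified
```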
Ok, I understand what you mean now. Now that you’ve clarified what Eliezer meant by anticipated experience, my original objection to it is no longer applicable. Thank you for an interesting and thought-provoking discussion.
Slightly OT, but this relates to something that really bugs me. People often bring up the importance of statistical analysis and the possibility of flukes/lab error in order to argue that “Popper was totally wrong, we get to completely ignore him and this outdated, long-refuted notion of falsifiability.”
But the way I see it, this doesn’t refute Popper, or the notion of falsifiability: it just means we’ve generalized the notion to probabilistic cases, instead of just the binary categorization of “unfalsified” vs. “falsified”. This seems like an extension of Popper/falsifiability rather than a refutation of it. Go fig.
I reached a much clearer understanding once I peeled away the structure of the probability measure and got down to mathematically crisp events on sample spaces (classes of possible worlds). From this perspective, there are falsifiable concepts, but they usually don’t constitute useful statements, so we work with the ones that can’t be completely falsified, even though parts of them (some of the possible worlds they include) do get falsified all the time, whenever you observe something.
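Here is a toy version of that picture (the worlds and the “biased coin” hypothesis are invented just for illustration): a hypothesis is a set of possible worlds, an observation throws out the worlds inconsistent with it, and the hypothesis survives as long as some of its worlds survive.

```python
# Toy model: possible worlds are (coin_is_biased, first_flip) pairs, and a
# hypothesis is the event (set of worlds) in which coin_is_biased is True.
from itertools import product

worlds = set(product([True, False], ["heads", "tails"]))
hypothesis = {w for w in worlds if w[0]}  # "the coin is biased"

# Observing the first flip come up heads falsifies every tails-world,
# including some worlds inside the hypothesis...
consistent_with_observation = {w for w in worlds if w[1] == "heads"}
surviving_hypothesis = hypothesis & consistent_with_observation

# ...but the hypothesis as a whole is not falsified: part of it survives.
print(surviving_hypothesis)  # {(True, 'heads')} -- still non-empty
```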
Isn’t that like saying we’ve generalized the theory that “all is fire” to cases where the universe is only part fire? If falsification is absolute then Popper’s insight that “all is falsification” is just plain wrong; if falsification is probabilistic then surely the relevant ideas existed before Popper as probability theory. It’s not like Popper invented the notion that if a hypothesis is falsified we shouldn’t believe it.