Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified. At the time Popper wrote that, scientists had not yet figured out the rules for using probability correctly; the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
Is this true? Popper wrote LScD in 1934. Keynes and Ramsey wrote about using probability to handle uncertainty in the 1920s, although I don’t think anyone paid attention to that work for a few years. I don’t know their work in enough detail to comment on whether Popper takes it into account, although I certainly get the impression that he was influenced by Keynes.
According to the Wikipedia page, Cox’s theorem first appeared in R. T. Cox, “Probability, Frequency, and Reasonable Expectation,” Am. Jour. Phys., 14, 1–13 (1946). Prior to that, I don’t think probability had much in the way of philosophical foundations, although the technical side may have been right. And the correct use of probability for more complex things, like causal models, didn’t come until much later. (And Popper was dealing with the case of science-in-general, which requires those sorts of advanced tools.)
The English version of LScD came out in 1959. It wasn’t a straight translation; Popper worked on it himself. My (somewhat vague) understanding is that he changed some material, or at least added some footnotes (and appendices?).
Anyway, Popper published plenty after 1946, including material from the LScD postscript, which was split into several books, as well as various other books where he had the chance to say whatever he wanted. If he had thought there was anything important to update, he would have. For example, probability gets a lot of discussion in Popper’s replies to his critics, and Bayes’ theorem in particular comes up; that’s from 1974.
So, for example, on page 1185 of the Schilpp volume 2, Popper says he never doubted Bayes’ theorem, but that “it is not generally applicable to hypotheses which form an infinite set”.
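For context, one standard way to spell out the infinite-set worry (my gloss, not a reconstruction of Popper’s own argument): Bayes’ theorem needs a prior over the whole hypothesis set, and no uniform prior exists over a countably infinite set.

```latex
P(H_i \mid E) = \frac{P(E \mid H_i)\, P(H_i)}{\sum_{j} P(E \mid H_j)\, P(H_j)},
\qquad
\text{but if } P(H_i) = c \text{ for all } i \in \mathbb{N}, \text{ then }
\sum_{i=1}^{\infty} P(H_i) =
\begin{cases} 0 & c = 0 \\ \infty & c > 0 \end{cases}
\neq 1 .
```

So over an infinite hypothesis set the prior must be non-uniform, and where that prior comes from is the kind of question at issue.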
How can something be partially falsified? A theory is either consistent with the evidence or contradicted by it; that’s a dichotomy. To allow partial falsification, you have to judge in some other way, one with a larger number of possible outcomes. What way?
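For reference, the answer a Bayesian would presumably give to “what way?” is updating by Bayes’ theorem: evidence that is impossible under a hypothesis drives its posterior to zero (full falsification), while evidence that is merely improbable under it lowers the posterior without zeroing it (“partial falsification”). A minimal sketch, with made-up priors and likelihoods:

```python
# Three rival hypotheses with equal (made-up) priors.
priors = {"H1": 1 / 3, "H2": 1 / 3, "H3": 1 / 3}

# P(E | H): likelihood of the observed evidence under each hypothesis.
# H3 forbids the evidence outright; H2 merely makes it improbable.
likelihoods = {"H1": 0.9, "H2": 0.1, "H3": 0.0}

# Total probability of the evidence: P(E) = sum over h of P(E|h) * P(h).
p_e = sum(likelihoods[h] * priors[h] for h in priors)

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
posteriors = {h: likelihoods[h] * priors[h] / p_e for h in priors}
```

On these numbers, H3’s posterior is exactly 0 (fully falsified), H2’s drops from 1/3 to 0.1 (the “partial falsification” bookkeeping the quote describes), and H1’s rises to 0.9.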
Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified.
You’re saying you start without them and come up with some in the middle. But how does that work? How do you get started without having any?
the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
Changing the math cannot answer any of his non-mathematical criticisms. So his challenge remains.