Bayes? To paraphrase you, the philosophy of science has moved on a bit since the 1700s.
I’ve already read Yudkowsky’s article. I’ve been familiar with Bayes since high school, which was thirty years ago. I was one of those kids who found it very easy to grasp. It’s just a simple mathematical fact, and certainly nothing to build a philosophy of science around.
Yudkowsky writes:
“What is the so-called Bayesian Revolution now sweeping through the sciences, which claims to subsume even the experimental method itself as a special case?”
Really? So the “experimental method”, by which I assume he means the “scientific method”, boils down to a special case of what, a probability calculation?
Come on, how gullible do you think I am? So under this view scientists have been, all along, calculating probability frequencies based on observations and just picking the theory with the highest probability. Heck, that sounds easy. Guess I’ll just write a scientist-simulating AI program over the weekend. We’ll just replace all the real scientists with Bayesian decision models.
So, according to this view, if I go back and do some historical investigation of The Great Devonian Controversy, I should find that the scientists involved were calculating and comparing notes on the probabilities they were getting from all the Bayesian calculations they were doing. Right?
This, besides being ludicrous, is just not how people reason, as admitted by Yudkowsky in the very same article:
“Bayesian reasoning is very counterintuitive. People do not employ Bayesian reasoning intuitively, find it very difficult to learn Bayesian reasoning when tutored, and rapidly forget Bayesian methods once the tutoring is over. This holds equally true for novice students and highly trained professionals in a field. Bayesian reasoning is apparently one of those things which, like quantum mechanics or the Wason Selection Test, is inherently difficult for humans to grasp with our built-in mental faculties.”
As my other comment pointed out, in a quote from your own article, it’s pretty clear that scientists do not choose what to believe by calculating Bayesian probabilities. They do not use Bayes, for good reasons: often they have no underlying probabilities to work with and the distributions are unknown, and furthermore most problems don’t reduce to a Bayesian probability at all.
It’s hard to imagine that Darwin did an explicit Bayesian calculation in order to choose his theory over Lamarckism, let alone to come up with the theory in the first place. It’s even harder to imagine that he did it implicitly in his mind, when it is quite clear that a) “People do not employ Bayesian reasoning intuitively”, and
b) Bayes’ theorem only applies in special cases where you have known probabilities and distributions.
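Point b) is visible in the theorem itself. Written out (this is just the standard statement, not anything from Yudkowsky’s article), every term on the right-hand side must already be known before it tells you anything:

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
```

You need a prior P(H), a likelihood P(D|H), and the overall P(D); if any of these is unavailable, as it typically is for competing scientific theories, the calculation cannot even be set up.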
In the Devonian Controversy no probabilities or distributions were known, no probability calculations were done, and the winning hypothesis was NOT picked on the basis of greatest probability. There was no greatest probability, and there was no winner: all the competing hypotheses were rejected.
If intelligence and understanding of the natural world were just a matter of applying Bayesian logic, don’t you think that not just human brains, but brains in general, would already have been selected for it? We should be good at it, not bad.
The human brain has evolved lots of subunits that are good at solving lots of different kinds of problems. Heck, even crows seem to be able to count. We seem to be able to classify and model things as consistent or inconsistent, periodic, dangerous, slow, fast, level, slanted, near, far, etc. These are mental models or modules that are probably prefabricated. Yet we humans don’t seem to have any prefabricated Bayesian deduction unit built in (by the way, Bayesian “induction” is actually based on deduction, not induction; funny, that). It’s actually the other way round from what he wrote: Bayesian induction is subsumed by a scientific method characterized by Popperian falsification (more precisely, pan-critical rationalism).
Don’t be confused by the name. Popperian falsification is no more merely about falsification than the theory of natural selection is merely about “survival of the fittest”. Popperian falsification is about holding beliefs tentatively and testing them. Thus one might hold, as a tentative belief, that Bayesian induction works. Although you will find that this “induction” is really a matter of deducing from some assumed base probabilities, probabilities that are founded on tentatively believing you did your measurements correctly. So on and so forth.
In his example there is plenty of room for falsification:
“1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?”
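For reference, the calculation this example calls for is a single application of Bayes’ theorem to the three quoted figures. A minimal sketch, using only the numbers stated above:

```python
# Bayes' theorem applied to the quoted mammography example.
# All three inputs are the assumed figures from the quote above.
prior = 0.01            # P(cancer): 1% of women screened have breast cancer
sensitivity = 0.80      # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

# P(positive) by the law of total probability
p_positive = sensitivity * prior + false_positive * (1 - prior)

# P(cancer | positive) by Bayes' theorem
posterior = sensitivity * prior / p_positive
print(f"{posterior:.3f}")  # roughly 0.078, i.e. about 7.8%
```

The arithmetic is trivial; the contentious part, as argued below, is whether the three input numbers deserve to be believed in the first place.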
The above can be addressed from many Popperian angles. Are our assumptions correct? How did we come up with that 1% figure? It could be falsified by pointing out that our sample was biased. Perhaps the women were lying about their ages; what measures were taken to verify them? Where did the 80% claim come from? Is it from a demographic that matches the woman we are talking about? So on and so forth.
It’s not just a matter of plugging numbers into a Bayesian equation so that the answer falls out. Yes, you can use Bayesian logic to build a decision model. That doesn’t mean you should believe or act on the result, and it doesn’t mean it’s the primary method being used.