A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?
Given no other information, we don’t know which is more likely. We need numbers for “rarely”, “most”, and “exceedingly few”. For example, if 10% of humans currently have a cold, and 1% of humans with a cold have a headache, but 1% of humans have a brain tumor, then the brain tumor is actually the more likely cause of the headache.
(The calculation we’re performing is: compare (“rarely” times “most”) to “exceedingly few” and see which one is larger.)
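For anyone who does want it spelled out, here is a minimal sketch in Python of that comparison, using the made-up percentages from the example above (they are purely illustrative, not real epidemiology). Note that “always” counts as probability 1, so the tumor side of the comparison is just the base rate.

```python
# Toy comparison with the hypothetical percentages from the example above.
p_cold = 0.10                 # "most": share of people who currently have a cold
p_headache_given_cold = 0.01  # "rarely": share of cold sufferers who get a headache
p_tumor = 0.01                # "exceedingly few": share of people with a brain tumor
p_headache_given_tumor = 1.0  # "always": every brain tumor causes a headache

p_headache_from_cold = p_cold * p_headache_given_cold     # 0.001
p_headache_from_tumor = p_tumor * p_headache_given_tumor  # 0.010

print(p_headache_from_cold, p_headache_from_tumor)
# 0.001 vs 0.01: with these numbers the tumor is ten times more likely to be
# the explanation, even though tumors are far rarer than colds.
```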
You’re missing the point. This post is suitable for an audience whose eyes would glaze over if you threw in numbers, which is wonderful (I read the “Intuitive Explanation of Bayes’ Theorem” and was ranting for days about how there was not one intuitive thing about it! it was all numbers! and graphs!). Adding numbers would make it more strictly accurate but would not improve anyone’s understanding. Anyone who would understand better if numbers were provided has their needs adequately served by the “Intuitive” explanation.
Agreed, I did not find the “Intuitive Explanation” to be particularly intuitive even after multiple readings. Understanding the math and principles is one thing, but this post actually made me sit up and go, “Oh, now I see what all the fuss is about,” and see why it matters outside a relatively narrow range of issues like diagnosing cancer or identifying spam emails.
Now I get it well enough to summarize: “Even if A will always cause B, that doesn’t mean A did cause B. If B would happen anyway, this tells you nothing about whether A caused B.”
Which is both a “well duh” and an important idea at the same time, when you consider that our brains appear to be built to latch onto the first “A” that would cause B, and then stubbornly hang onto it until it can be conclusively disproven.
That’s a “click” right there, that makes retroactively comprehensible many reams of Eliezer’s math rants and Beisutsukai stories. (Well, not that I didn’t comprehend them as such… more that I wasn’t able to intuitively recreate all the implications that I now think he was expecting his readers to take away.)
So, yeah… this is way too important of an idea to have math associated with it in any way. ;-)
Personally, it bothers me that the explanation asks a question which is numerically unanswerable, and then asserts that rationalists would answer it in a given way. Simple explanations are good, but not when they contain statements which are factually incorrect.
But, looking at the karma scores, it appears that you are correct that this is better for many people. ^_^;
I thought “Truly Part of You” was an excellent introduction to rationalism/Bayesianism/Less Wrong philosophy that avoids much use of numbers, graphs, and technical language. So I think it’s more appropriate for the average person, or for people to whom equations don’t appeal.
Does anyone who meets that description agree?
And could someone ask Alicorn if she prefers it?
Hmmmm… that’s an interesting article too, but it focuses on a different question, namely what knowledge really means, and it uses AI concepts to discuss that (somewhat related to Searle’s Chinese Room thought experiment).
However, I think the article discussed here is a bit more directly connected to Bayesianism. It’s clear what Bayes’ theorem means, but what many people today mean by Bayesianism is something of a loose extrapolation of that, or even just a metaphor.
I think the article does a good job of explaining that current usage.