If that genuinely became a problem, we could require people to solve a simple Bayesian problem before registering.
Sort of a rationalism troll CAPTCHA. I’d like it, but solving those problems requires math—I probably would never have joined if I’d had to do it.
Another advantage of this type of CAPTCHA is that it doesn’t discriminate against intelligent computer programs who aren’t very good at visual character recognition.
How would you feel about a less quantitative, more philosophical reasoning test?
A simpler idea might be to just have a karma filter, so no one below a threshold karma value on LW could post on meta.
A more philosophical reasoning test would not feature scary scary math. It would probably, of necessity, be more subjective, which could create a bottleneck in processing test results if we didn’t want to arbitrarily limit possible responses and miss out on some possible nuance.
But now that you’ve actually been here for a while, you probably wouldn’t find it as much of a barrier. Right? So it wouldn’t be so much a math filter as a having-read-LW filter, which is what we want.
I didn’t learn about Bayes’ theorem for the first time on LW; I learned it in my epistemology class when I was a sophomore in college. Having read LW has not made me better at or more affectionate towards or more enthusiastic about spending time on math. (It probably has contributed towards convincing me that if I devoted a lot of time to it, I could become good at math, but hasn’t motivated me to do so.) I’ve come to value participating on LW enough that I’d solve a simple Bayes problem to stay. (Or at least goad a friend into giving me the answer.)
But my point wasn’t about me so much—it was about future possible contributors. Assuming people here think it’s good to have me around, introducing barrier conditions that would have deterred me may be unwise, because they could deter people like me.
I’m curious to see an example or two of what these Bayesian problems might look like, if anybody has any ideas. I mean, it may be relevant to know just what difficulty level this test would be. Of course, what’s simple for some LessWrong contributors is probably not simple for everyone.
The standard one goes something like, “The dangerous disease itchyballitis has a frequency of 1% in the general population of men. The test for the disease has an accuracy of 95% (for both false positives and false negatives). A randomly selected dude gets tested and the result is positive. What’s the probability he has the disease?”
But most people get that wrong. A correct answer is more likely when the problem is phrased in equivalent but more concrete terms as follows: “The dangerous disease itchyballitis affects 100 out of 10,000 men. The test for the disease gives the correct answer 95 times out of 100. A randomly selected dude gets tested and the result is positive. What’s the chance he has the disease?”
Or, for the approximate answer, just compare the base rate with the false positive rate (multiplying by .9something has small impacts that mostly cancel out). About 1% of people test positive due to having the disease (a bit less, actually), about 5% of people test positive because of an inaccurate test (a bit less, actually), so a person with a positive test has about a 1 in 6 chance of having the disease.
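In case anyone wants to check the arithmetic, here’s a minimal Python sketch (the variable names are just mine, nothing canonical) that works the concrete-counts version and compares it with the base-rate-vs-false-positive shortcut:

# Out of 10,000 men, 100 have the disease; the test is right 95 times out of 100.
population = 10000
sick = 100
healthy = population - sick
true_positives = sick * 0.95        # sick men who correctly test positive
false_positives = healthy * 0.05    # healthy men who incorrectly test positive
exact = true_positives / (true_positives + false_positives)
approx = 0.01 / (0.01 + 0.05)       # shortcut: base rate vs. false positive rate
print(exact)    # ~0.161, about 1 in 6
print(approx)   # ~0.167, the back-of-envelope answer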
p(test_positive|itchyballitis) = 0.95
p(test_positive|!itchyballitis) = 0.05
p(itchyballitis) = 0.01
p(test_positive) = p(test_positive|itchyballitis) * p(itchyballitis) + p(test_positive|!itchyballitis) * p(!itchyballitis)
= 0.95 * 0.01 + 0.05 * 0.99
= 0.059
p(itchyballitis|test_positive) = (p(test_positive|itchyballitis) * p(itchyballitis)) / p(test_positive)
= (0.95 * 0.01) / 0.059
= 0.161
Edit: If anyone else is thinking of writing math stuff in a comment, don’t do what I did. Read http://wiki.lesswrong.com/wiki/Comment_formatting first! Also, thanks Vladimir_Nesov.
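For anyone who’d rather read it as code than as comment-formatted math, here’s the same calculation in a few lines of Python (names are just illustrative):

# Exact Bayes' theorem calculation for the test example above
p_pos_given_sick = 0.95        # p(test_positive | itchyballitis)
p_pos_given_healthy = 0.05     # p(test_positive | !itchyballitis)
p_sick = 0.01                  # p(itchyballitis)
p_pos = p_pos_given_sick * p_sick + p_pos_given_healthy * (1 - p_sick)   # 0.059
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos                     # ~0.161
print(p_pos, p_sick_given_pos)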
It’s more intuitive to use odds. Prior odds are 1:99, likelihood ratio (strength of evidence) given by a positive test is 95:5, so posterior odds are (1:99)*(95:5)=19:99, or probability of 19⁄118 (about 16%).
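And a quick sketch of that odds-form calculation in Python (again, my own naming), for anyone who finds it easier to follow in code:

# Posterior odds = prior odds * likelihood ratio
prior_for, prior_against = 1, 99           # 1% base rate expressed as odds
lr_for, lr_against = 95, 5                 # P(positive | sick) : P(positive | healthy)
post_for = prior_for * lr_for              # 95
post_against = prior_against * lr_against  # 495 -> reduces to 19:99
probability = post_for / (post_for + post_against)
print(probability)    # ~0.161, about 16%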
Use backslash before stars \* to have them in the comment * without turning text into italics.
I agree with Alicorn. Unless you want an echo chamber, math problems seem like a bad filter. Diversity is valuable.
You don’t think there is diversity of thought among trained mathematicians?
Is there a small Bayesian quiz of sorts lying around here somewhere? I would certainly benefit from such a thing while learning the ropes.