When we find components, how will we know which one to call “rationality”? If the point is that it “predicts accurate belief-formation”, doesn’t the test interpreter need to know more about what beliefs are accurate than the average test-taker? We might have preconceptions about what answers accurate belief-formers would give on questions not about beliefs, but it seems to me those aren’t very helpful unless they’re very strong, and my preconceptions here aren’t very strong.
Section D will have well-defined right answers. Some will even be unknown to test-takers and long-time LW readers; e.g., we can ask for 99% confidence intervals on unfamiliar trivia and then check whether the true answers fall within the test-takers’ intervals or not.
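As a rough sketch of how that scoring could work (the trivia items and numbers below are made up, purely for illustration):

```python
# Minimal sketch of the 99%-interval calibration check described above.
# Each response is (lower_bound, upper_bound, true_value); these are
# placeholder values, not real test items.

responses = [
    (1000, 5000, 3200),  # trivia item 1: stated 99% interval and true answer
    (10, 40, 55),        # trivia item 2: true answer falls outside the interval
    (1850, 1900, 1889),  # trivia item 3
]

hits = sum(1 for lo, hi, truth in responses if lo <= truth <= hi)
hit_rate = hits / len(responses)

# A well-calibrated test-taker's 99% intervals should contain the true
# answer about 99% of the time; a much lower hit rate suggests overconfidence.
print(f"{hits}/{len(responses)} intervals contained the truth ({hit_rate:.0%})")
```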
You’re right about section E as far as knowing a definite interpretation ahead of time goes. But those of us who think we know the right beliefs for some of the above (e.g., the question on religious views) can go ahead and interpret, and I’ll post the aggregate data on the web so that those with different interpretations of the right answers can interpret differently.
Also, if we find that an unexpected answer to one of the questions in section E correlates with the “best-guess-right” answers to the other section E questions, with correct answers on section D, and with e.g. trying to seek information… that’ll be evidence that the unexpected answer might be correct after all. And so, if we eventually develop a version of this test that actually yields rationality scores, we could either skip that question or score it in the unexpected direction.
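For concreteness, here’s a minimal sketch of that kind of item check (the respondent scores and variable names are made-up placeholders, not real data; in practice the inputs would come from the posted aggregate data):

```python
# Sketch of the item-validation idea above: does giving the "unexpected"
# answer on one section E item correlate with doing well elsewhere?

from statistics import correlation  # Python 3.10+

# 1 = gave the unexpected answer on the item in question, 0 = did not
unexpected_answer = [1, 0, 1, 1, 0, 0, 1, 0]

# Each respondent's score on the rest of section E and on section D
rest_of_e_score = [0.9, 0.4, 0.8, 0.7, 0.5, 0.3, 0.85, 0.45]
section_d_score = [0.95, 0.5, 0.8, 0.75, 0.6, 0.4, 0.9, 0.55]

r_e = correlation(unexpected_answer, rest_of_e_score)
r_d = correlation(unexpected_answer, section_d_score)

# Positive correlations would be (weak) evidence that the unexpected answer
# tracks whatever the rest of the test measures, so the item might be
# rescored or dropped in a later version.
print(f"correlation with rest of section E: {r_e:.2f}")
print(f"correlation with section D: {r_d:.2f}")
```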
It seems like there are two goals here: using our opinions of which beliefs are true to find out whether people are rational, and using our opinions (informed by the test) of whether people are rational to find out which beliefs are true. (We can use some of the information in one direction and some in the other, but we can’t use any of it in both directions, so to speak.) From the latter perspective, I think it might be very useful to just ask people a lot of probabilistic questions about “big issues”, and not ask them about personal stuff, so they can attach the estimates to their screennames. Maybe this needs a way to avoid commitment pressures.
I agree that it might be interesting to know what correlates with getting H&B questions right, but I’m not sure getting H&B questions right translates that well to rationality in general, especially on the right end of the curve.