It seems like there are two goals here: using our opinions about which beliefs are true to find out whether people are rational, and using our opinions (informed by the test) about whether people are rational to find out which beliefs are true. (We can use some of the information in one direction and some in the other, but we can’t use any of it in both directions, so to speak.) From the latter perspective, I think it might be very useful to just ask people a lot of probabilistic questions about “big issues” rather than about personal matters, so they can attach the estimates to their screennames. This might need some way to avoid commitment pressures.
I agree that it might be interesting to know what correlates with getting heuristics-and-biases (H&B) questions right, but I’m not sure that getting H&B questions right translates well to rationality in general, especially at the right end of the curve.