For ‘hot’ political and religious biases, create materials in which apparent advocates of different ideologies or parties are arguing for some particular empirical prediction, e.g. about the relationship between different tax rate changes and economic growth, with some predictions being right and some wrong. The subject then needs to make his or her own prediction about some easily-verifiable but obscure empirical fact related to the argument, e.g. whether a graph of GDP and tax rates matches Norway or Iceland.
Scoring would reflect the degree to which the ideological affiliation in the prompt biased the results. If the test were being gamed, you might need to add scoring for accuracy as well. The challenges would be producing a large enough inventory of test items, keeping them secret, and tailoring tests to locally popular ideologies or ideologies of interest.
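One way to sketch the scoring idea: present each empirical item with either an ideologically congruent or incongruent advocate, and measure how much the framing shifts agreement. The function name and data format below are hypothetical illustrations, not a fixed protocol.

```python
from collections import defaultdict

def framing_bias(responses):
    """Estimate framing bias from matched test items.

    responses: list of dicts with keys
      'subject_ideology'     -- the subject's own affiliation
      'advocate_ideology'    -- the affiliation of the advocate in the prompt
      'agreed_with_advocate' -- bool, did the subject side with the advocate?

    Returns the agreement rate with congruent advocates minus the rate
    with incongruent ones; 0.0 would indicate no detectable framing bias.
    """
    tallies = defaultdict(lambda: [0, 0])  # condition -> [agreements, total]
    for r in responses:
        cond = ('congruent'
                if r['subject_ideology'] == r['advocate_ideology']
                else 'incongruent')
        tallies[cond][0] += int(r['agreed_with_advocate'])
        tallies[cond][1] += 1

    def rate(cond):
        agreed, total = tallies[cond]
        return agreed / total if total else 0.0

    return rate('congruent') - rate('incongruent')
```

A fully ideology-driven subject would score near 1.0 (always agreeing with their own side), while an unbiased subject's agreement would not depend on the advocate's affiliation at all.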
More surveys that study the relationship between knowledge of verifiable facts and values. What sorts of information do those with different values tend to have, and what are the values of those whose knowledge covers the pet facts of all camps? There is a fair amount of literature like this in political science on the electorate and its political knowledge, but it would be good to extend it to other topics, e.g. scientific ones.
Announced probability distributions (not just predictions, so as to enable better scoring) for the results of upcoming experiments. For instance, we know that in the next 2-3 years we are going to get a huge amount of genomic data that will answer a lot of questions about the genetic architecture of human diseases. Making public quantitative predictions about things like that could be quite informative.
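The point of announcing full probability distributions rather than point predictions is that they can be graded with a proper scoring rule, which rewards honest probabilities. As a minimal sketch (the discrete-outcome representation here is an assumption for illustration), two standard proper scoring rules:

```python
import math

def log_score(distribution, outcome):
    """Logarithmic score for a discrete distribution (dict: outcome -> prob).
    Higher (closer to 0) is better; a strictly proper scoring rule, so
    reporting one's honest probabilities maximises expected score."""
    p = distribution.get(outcome, 0.0)
    if p <= 0.0:
        return float('-inf')  # assigned zero probability to what happened
    return math.log(p)

def brier_score(distribution, outcome):
    """Brier score: mean squared error against the realised outcome,
    summed over the outcomes listed in the distribution (the realised
    outcome is assumed to be one of its keys). Lower is better."""
    return sum((p - (1.0 if k == outcome else 0.0)) ** 2
               for k, p in distribution.items())
```

For example, a forecaster who announced 80% for the result that actually occurred would receive a better (lower) Brier score than one who announced 50%, and the scores can be accumulated across many experiments to compare forecasters.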
Hot political/religious issues seem like a great way to tempt people into saying/believing irrational things. This is a good idea.
Very solid example of how to test for that bias.