however, namely a political bias test which I have constructed with ClearerThinking, run by EA member Spencer Greenberg.
I just took the test. Looking at the official answers, I have to say that the test probably says more about the test makers’ bias than my own. For example, a number of the answers cite “scientific consensus”, which is a philosophically dubious concept, especially in areas like global warming and GMOs where there is reasonable suspicion that the “consensus” is politically manufactured. Even worse is “economic consensus”, a.k.a. “the economists we cherry-picked all agree”.
It doesn’t help that some of the economics questions are ambiguous. Take “Did the Obama administration’s 2009 stimulus package reduce or increase unemployment?”: are we including the effects on the economy of borrowing the money to pay for the stimulus?
Another example is the World Giving Index. While the answer (that the US gives more than European states) is probably true, the fact that the index has the US tied with Myanmar is extremely strong evidence that the index is BS.
Whether or not the index produces that effect seems to be a fairly objective question.
If conservatives get this right but biased liberals get it wrong, this indeed shows bias.
Analogy: I describe a real-life situation of a police officer shooting a suspect. I then ask people what they think the races of the police officer and the suspect are. Because I am referring to a specific real-life case, the question has a single, factual answer, and each person’s answer is either correct or not.
Yet I can manipulate the question to show liberal bias or conservative bias, my choice, simply by choosing which case to ask about.
The best way to ask that question to legitimately detect bias would be to choose a typical case, and to assume that people who haven’t heard of the specific case will answer based on the facts about typical cases.
And in this situation, a typical case would be an index of X that accurately measures X. Choosing an index that doesn’t accurately measure X would skew the ability to use that question to detect bias, since I expect that unbiased people who haven’t heard of the index in question would answer based on an accurate measure of X.
That’s an interesting one: I think black people are disproportionately at risk of being killed by the police in the US, but in absolute numbers about as many white people as black people get killed.
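Both halves of that can be true at once. Here is a toy calculation, with purely made-up numbers, showing how equal absolute counts can coexist with very unequal per-capita risk:

```python
# Purely illustrative numbers, not real statistics, chosen only to show
# how equal absolute counts can coexist with unequal per-capita risk.
population = {"black": 40_000_000, "white": 200_000_000}  # hypothetical sizes
killed = {"black": 250, "white": 250}                     # hypothetical equal counts

per_capita = {g: killed[g] / population[g] for g in population}
risk_ratio = per_capita["black"] / per_capita["white"]

print(per_capita)   # {'black': 6.25e-06, 'white': 1.25e-06}
print(risk_ratio)   # 5.0: same absolute count, five times the per-capita risk
```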
|the fact that the index has the US tied with Myanmar is extremely strong evidence that the index is BS.
Here’s a bit more information; alternatively, you can read the report yourself. TL;DR: the reports are based on self-reporting, the number of people giving is weighted as heavily as the amount given, and giving to religious charities (like Buddhist monks) counts. But yes, Myanmar does have a lot of people giving money to other people.
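To make the weighting point concrete, here is a minimal sketch of how a composite that counts participation as heavily as amount given can tie a high-participation, low-amount country with a low-participation, high-amount one. The equal weighting and every number here are hypothetical, not taken from the actual report:

```python
# Hypothetical composite index: participation rate and (normalized) amount
# given each carry half of the score. All inputs below are made up.
def giving_score(participation_rate, amount_per_capita, max_amount=1000):
    return 0.5 * participation_rate + 0.5 * (amount_per_capita / max_amount)

# Near-universal small gifts vs. fewer but much larger gifts:
myanmar_like = giving_score(participation_rate=0.92, amount_per_capita=80)
us_like = giving_score(participation_rate=0.60, amount_per_capita=400)

print(myanmar_like, us_like)  # 0.5 0.5: a tie, despite very different giving
```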
The question isn’t whether the index is BS but what signal the judgement communicates. You don’t learn that by investigating the index in detail but by looking at correlations between the answer to that question and the answers to the other questions.
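As a sketch of what that could look like in practice (fabricated data, and item-rest correlation as just one plausible choice of statistic):

```python
# Judge a test item by how its answers correlate with answers to the
# other items, rather than by auditing the underlying index in detail.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 10
# Fabricated responses: rows = respondents, columns = items, 0/1 = wrong/right.
answers = rng.integers(0, 2, size=(n_respondents, n_items))

item = 0  # say, the World Giving Index question
rest = answers[:, [i for i in range(n_items) if i != item]].sum(axis=1)

# Item-rest correlation: a clearly non-zero value means the item carries
# the same signal as the rest of the test (near zero here by construction).
r = np.corrcoef(answers[:, item], rest)[0, 1]
print(round(r, 3))
```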