There’s a whole bunch of information out there (literally more than any one person could, or would care to, know), and we simply don’t have the time (or often the background) to fully understand certain fields and, more importantly, to evaluate which claims are true and which aren’t.
In other words, reality is objective and claims should be evaluated based on their evidence, not the person who proposes them.
It would seem to me that these claims aren’t consistent. I agree with the first claim, not with the second. It’s true that experts’ claims are objectively and directly verifiable, but lots of the time checking that direct evidence is not an optimal use of our time. Instead we’re better off deferring to experts (which we actually also do, as you say, on a massive scale).
I think we are in agreement, but my second statement didn’t have the caveats it should have. I doubt you would disagree with the first half, that reality is objective. You disagreed with the second half, that claims should be evaluated based on evidence; not because it’s a false statement, but because, in practice, we cannot reasonably be expected to do this for every claim we encounter. I agree. The unstated caveat is that we should trust the experts until there is a reason to think that their claims are poorly founded, e.g. they have demonstrated bias in their work or there is a lack of consensus among experts in the relevant field.
The main thing I am trying to say that ties directly into your post is that we shouldn’t really care how someone formed their beliefs when evaluating the veracity of a claim; when we should care is:
I don’t agree with that. We use others’ statements as a source of evidence on a massive scale (i.e. we defer to them). Indeed, experiments show that we do this automatically. But if these statements express beliefs that were produced by unreliable processes (e.g. bias), then deferring is clearly not a good strategy. Hence, when evaluating the veracity of many claims, we should care very much about whether someone is biased.
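To make the evidential-weight point concrete, here is a minimal Bayesian sketch (my own toy illustration with made-up numbers, not something from the post). The strength of someone’s testimony for P depends on how much more likely they are to assert P when it is true than when it is false; a bias that makes them assert P regardless of the truth pushes that ratio toward 1, so their statement carries almost no evidence.

```python
# Toy sketch: how much should a source's assertion of P move our credence?
# All numbers are invented for illustration.

def posterior(prior, p_assert_if_true, p_assert_if_false):
    """Bayes' rule: credence in P after the source asserts P."""
    numerator = p_assert_if_true * prior
    return numerator / (numerator + p_assert_if_false * (1 - prior))

prior = 0.5

# A reliable source asserts P far more often when P is actually true.
print(posterior(prior, 0.9, 0.1))  # ~0.90: the testimony is strong evidence

# A biased source asserts P almost regardless of the truth.
print(posterior(prior, 0.9, 0.8))  # ~0.53: the testimony is nearly worthless
```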
Hold on now, you did read my bullets, right?
When we should care is:
When we suspect that a bias may have led to a false reporting of real information (in which case we would want independent, unbiased research/reporting)
Notice that I actually did say suspicion of bias is an exception to the “not caring” statement. In other words, unless we have a reason to suspect a bias (and/or the second bullet applies), we probably won’t care. There are other ways bad conclusions can be drawn; the reason I mention bias is that it is systematic. If we see a trend of a particular person systematically coming to poor conclusions, whatever their reason, then our confidence in their input would fall. On the other hand, experts are human and can make mistakes as well; we should not dismiss someone for being wrong once, but for being systematically wrong and unwilling to fix the problem. If we really care about high confidence in something, for instance when the truth of the claim is important to a lot of people and we want to avoid being misled by a few biased opinions, we seek the consensus.
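To make the “wrong once” versus “systematically wrong” distinction concrete, here is a small sketch of my own (illustrative numbers only): treat our confidence in a source as a Beta distribution over their accuracy and update it with every claim of theirs we manage to verify. A single mistake barely dents a long track record, while a pattern of mistakes drags the estimate down.

```python
# Toy sketch: confidence in a source as a Beta distribution over their
# accuracy, updated with each verified claim. Numbers are illustrative.

def expected_accuracy(correct, wrong):
    """Posterior mean of Beta(correct + 1, wrong + 1), starting from a
    uniform prior over the source's accuracy."""
    return (correct + 1) / (correct + wrong + 2)

# An expert with a long track record who slips up once:
print(expected_accuracy(50, 1))   # ~0.96: one mistake barely moves us

# A source who has been systematically wrong:
print(expected_accuracy(5, 10))   # ~0.35: our confidence in them collapses
```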
Now, can we get the consensus all of the time? Unfortunately not. Not even most of the time. So what’s our next line of defense? Well, one of them is journalistic integrity; frankly I don’t even want to go there, but, done properly, there are people whose job it is to sort through these very things. Let’s not go there for now, though. The last line of defense is you, and the actual work of checking things yourself.
If a claim is important enough for you to really care whether or not it’s accurate, then you have to be willing to do a little bit of digging yourself. Now I realize that the entire point of this post was to avoid just that thing and to have computers do it automagically; but really, if it is important enough for you to check on it yourself, rather than just trusting your regular sources of information, then would you be willing not to check just because a program said that this guy was unbiased?
That might be a bit of an unfair characterization of what you’re discussing, but there is a distinction to be made between using online behavior to measure/understand the general population’s belief structure and to check for bias in expert opinions.
I think the idea of understanding the population’s belief structures would still be extremely useful in its own right though, per my second bullet in the exceptions to the “don’t care” statement, particularly if someone wants to change a lot of people’s minds about something. If you have a campaign (be it political or social), then understanding how people have structured their beliefs would give you a road map for how best to go about changing them in the way you want. To some extent, this is how it’s already been done historically, but it was not done via raw data analysis.
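As an illustration of what doing this “via raw data analysis” might look like, here is a sketch of my own using invented toy data (not anything proposed in the post): take a matrix of people by stated positions and compute which positions correlate, which shows which beliefs travel together in the population.

```python
# Toy sketch: recovering belief structure from raw position data.
# Rows are people; columns are yes/no stances on four claims (invented data).
import numpy as np

positions = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])

# Correlation between claims: large off-diagonal entries mean two beliefs
# tend to be held (or rejected) together, hinting at a shared structure.
print(np.corrcoef(positions, rowvar=False).round(2))
```

On real data one would presumably reach for factor analysis or clustering rather than a bare correlation matrix, but even this much exposes which positions cluster.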
I feel that this discussion is getting a bit too multifarious, which no doubt has to do with the very abstract nature of my post. I’m not very happy with it. I should probably have started with more comprehensive and clear examples rather than an abstract and general discussion like this. Anyway, I do intend to give more examples of reverse engineering of belief structures in the future. Hopefully that’ll make it clearer what I’m trying to do. Here’s one example of reverse-engineering reasoning I’ve already given.
I agree that lots of the time we should “do a bit of digging ourselves”; i.e. look at the direct evidence for P rather than at whether those telling us P or not-P are reliable. But I also claim that in many cases deference is extremely cost-efficient and useful. You seem to agree with this, which is good.
...but there is a distinction to be made between using online behavior to measure/understand the general population’s belief structure and to check for bias in expert opinions.
Sure. But reverse-engineering reasoning can also be used to infer expert bias (as shown in this post).
To some extent, this is how it’s already been done historically, but it was not done via raw data analysis.
Yes. People already perform this kind of reverse-engineering reasoning, as I said (cf. my reference to Marx). What I want to do is to do it more systematically and efficiently.