Atheism doesn’t get 80% support among philosophers, and most philosophers of religion reject it because of a selection effect where few wish to study what they believe to be non-subjects (just as normative and applied ethicists are more likely to reject anti-realism).
Perhaps we shouldn’t look for professional consensus on things we accept with almost-certainty, because things that can be correctly accepted with almost-certainty by amateurs will not be professionally studied, except by people who are systematically confused. Instead, we should ask the non-professional opinion of people who are in a position to know the most about the subject, but don’t study it professionally.
You are correct that it is reasonable to assign high confidence to atheism even if it doesn’t have 80% support, but we must be very careful here. Atheism is presumably the strongest example of such a claim here on Less Wrong (i.e. one where you can tell a great story about why so many intelligent people would disagree, and hold a high confidence in the face of that disagreement). However, this does not mean that we can say that any other given view is just like atheism in this respect and thus hold beliefs in the face of expert disagreement; that would be far too convenient.
Strong agreement about not overgeneralizing. It does appear, however, that libertarianism about free will, non-physicalism about the mind, and a number of sorts of moral realism form a cluster, sharing the feature of reifying certain concepts in our cognitive algorithms even when they can be ‘explained away.’ Maybe we can discuss this tomorrow night.
Of course not; the substance of one’s reasons for disagreeing matters greatly. In this case, I suspect there’s probably a significant amount of correlation/non-independence between the reasons for believing atheism and believing something like moral non-realism.
One thing we should take away from cases like atheism is that surveys probably shouldn’t be interpreted naively, but rather as somewhat noisy information. I think my own heuristic (on binary questions where I already have a strong opinion) is basically to look on which side of 50% my position falls; if the majority agrees with me (or, say, the average confidence in my position is over 50%), I tend to regard that as (more) evidence in my favor, with the strength increasing as the percentage increases.
(This, I think, would be part of how I would answer Yvain.)
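To make the heuristic concrete, here is a toy sketch of treating a survey as noisy evidence rather than a verdict. It assumes a naive model, entirely my own for illustration, in which each of n experts independently sides with the truth with probability c > 0.5, so a majority of k agreeing with you updates your odds by (c/(1−c))^(k−(n−k)). Real surveys violate the independence assumption (which is exactly why they shouldn’t be read naively); correlated or selection-biased experts correspond to shrinking c toward 0.5.

```python
def posterior_after_survey(prior_p, n_experts, n_agree, competence=0.6):
    """Toy Bayesian update on a binary question from an expert survey.

    Assumes each expert independently sides with the truth with
    probability `competence` (> 0.5). This is an illustrative model,
    not a claim about how any real survey behaves.
    """
    n_disagree = n_experts - n_agree
    # Likelihood ratio for "my position is true" vs. "my position is false".
    likelihood_ratio = (competence / (1 - competence)) ** (n_agree - n_disagree)
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)
```

An even split leaves the prior untouched, and the update strengthens as the majority share grows, matching the 50%-threshold heuristic above.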
I think the arguments you’re developing here go a long way towards answering Toby’s point, but what safeguards can we use to ensure we can’t use it as a generalized anti-expert defence?
The prerequisite for this heuristic is coming to a conclusion with near-certainty on an amateur level. The safeguard has to be a general ability to avoid unjustified overconfidence.
Are you proposing a safeguard here or setting out what the safeguard has to achieve?
I’m pointing out that there is already a sufficiently general set of safeguards that covers this case in particular, adequate or not. That is, this heuristic doesn’t automatically lead us astray.
I don’t think I understand you properly; it reads like you’re saying that we can be confident in rejecting expert advice if we’ve already reached a contrary position with high confidence. That doesn’t sound Bayesian. I suspect the error is mine, but I’d appreciate your help in finding and fixing it!
EDIT: I [not Vladimir] would say that if we have one position that we can be confident in (atheism) we can use it as an indicator of expert quality, and pay more attention to those experts on other issues (e.g. moral realism as philosophers define it).
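One way to picture “using a slam-dunk question as an indicator of expert quality” is to give extra weight, when aggregating votes on a contested question, to the experts who got the calibration question right. The sketch below is my own illustration of that idea, not anyone’s actual proposal; the weight `bonus` and the example data are invented.

```python
def weighted_expert_share(experts, bonus=2.0):
    """Weighted share of expert votes for a contested position.

    `experts` is a list of (passed_calibration, votes_for) booleans:
    whether the expert got a slam-dunk calibration question right,
    and how they vote on the contested question. Experts who passed
    the calibration question get `bonus` weight instead of 1.
    """
    total_weight = 0.0
    votes_for = 0.0
    for passed_calibration, vote in experts:
        weight = bonus if passed_calibration else 1.0
        total_weight += weight
        if vote:
            votes_for += weight
    return votes_for / total_weight
```

For example, with two calibrated experts voting yes and three uncalibrated experts voting no, the weighted share is 4/7, flipping the unweighted 2/5 minority into a weighted majority.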
And with respect to the selection effect among philosophers of religion, there’s overwhelming direct evidence on this in the form of the Catholic Church push on this front.
Re: correction:
I would say so too, though I wasn’t saying that here. It is indeed the mechanism through which we can reject expert opinion, but it also applies to the very claim being contested, not just to the other slam-dunk claims.
Only where there’s a relationship, of course. We would be unwise to reject medical expertise from a body where atheists were few, unless religion impinged on that advice, e.g. abortion or cryonics. Here a relationship with religion is clear.
I would say that if on some matter of medical controversy atheist doctors and medical academics tended to come out one way, while the median opinion came out the other way, we should go with the atheist medical opinion, ceteris paribus. Atheism is a proxy for intelligence and scientific thinking, a correlation with a mountain of evidence in its favor.
Definitely if the majority opinion among atheist experts differed from the majority opinion among all experts, I’d go for the former, but if say the majority of doctors studying a disease were Catholic for simple geographic reasons, I’d still defer to their expertise.
I agree with this interpretation.
Zack is making basically the same point here.
(This discussion is about a meta-level mechanism for agreement, where you accept a conclusion; experts might well have persuasive arguments that reverse one’s confidence.)
(cf. Argument Screens Off Authority.)