In general, I think taking Yvain’s advice is probably better than not.
If the group of (otherwise) intelligent people is a group of anosognosiacs telling you that they aren’t disabled, then their common bias is probably sufficient grounds to dismiss them. In most cases, however, we don’t have the line-by-line code that produces the bad answers.
If you can’t look under the hood and say “The car doesn’t work because part x is doing y instead of z”, but rather “I’m not sure exactly how this thing is supposed to work, but something is leaking”, then you can’t be sure it isn’t going to do something right, even if its performance is suboptimal. There could be a hint of rationality buried in the muck, and in that case, they might manage to get it right.
One example of this is the “magical collapse” deal in QM. It may sound absurd (because it is), but the physicists talking about it aren’t stupid, and will still manage to give correct predictions. If you dismissed it as “entirely irrational” before looking at the evidence, you’d be stuck with classical physics.
If you then consider the fact that people tend to exaggerate other people’s biases (it’s always the other guy who is biased, right?), the case for at least putting some thought into it gets stronger.
One example of this is the “magical collapse” deal in QM. It may sound absurd (because it is)
Which alternative seems more absurd to you can depend a lot on what else you know or think you know. Many worlds seemed far more absurd to me than collapse until someone properly explained it to me.