One problem with trusting the experts rather than trying to think things through for yourself is that you need a certain amount of expertise just to understand what the experts are saying. The experts might be able to tell you that “all symmetric matrices are orthonormally diagonalizable,” and you might have perfect trust in them, but without a lot of personal study and inquiry, the mere words don’t help you very much.
That doesn’t matter if the expert can simply say “hire this guy”, “invest in this company”, “vote for this guy”, or “donate to this charity”. If you’re taking some complicated action that requires careful integration of expert advice, then it’s probably worth becoming at least a semi-expert yourself.
All the worse if you are convinced that God hates diagonalizable matrices, and so you prefer not to believe the heathen.
Experts don’t just tell us facts; they also offer recommendations as to how to solve individual or social problems. We can often rely on the recommendations even if we don’t understand the underlying analysis, so long as we have picked good experts to rely on.
There is a key right there. Skill in rational thinking and an understanding of common biases can drastically change whom we consider a good expert. The most obvious examples are ‘experts’ in medicine and economics. I suggest that the most influential experts in those fields are not the ones with the most accurate understanding.
Rationalist training could be expected to improve our judgement when choosing experts.
True. But it is still easier in many cases to pick good experts than to independently assess the validity of expert conclusions. So we might make more overall epistemic advances by a twin focus: (1) Disseminate the techniques for selecting reliable experts, and (2) Design, implement and operate institutions that are better at finding the truth.
Note also that your concern can be addressed as a subset of institutional design questions: how should we reform fields such as medicine or economics so that influence better tracks true expertise?
And if an expert says “all matrices are orthonormally diagonalizable”, it sounds equally impressive, but it is as false as false can be.
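For concreteness: the true claim is the spectral theorem, and the contrast with the false generalization can be checked numerically. A minimal sketch with NumPy (purely illustrative; the specific matrices are my own examples, not from the discussion above):

```python
import numpy as np

# The experts' true claim (the spectral theorem): every real symmetric
# matrix has an orthonormal basis of eigenvectors.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric: S == S.T
eigvals, Q = np.linalg.eigh(S)      # eigh handles symmetric/Hermitian matrices
# Q's columns are orthonormal eigenvectors, so Q.T @ Q is the identity...
assert np.allclose(Q.T @ Q, np.eye(2))
# ...and Q diagonalizes S: Q.T @ S @ Q is diagonal, with the eigenvalues.
assert np.allclose(Q.T @ S @ Q, np.diag(eigvals))

# The false generalization: "ALL matrices are diagonalizable."
# A Jordan block is the standard counterexample: its only eigenvalue is 0,
# but its eigenspace is one-dimensional, so no basis of eigenvectors exists.
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])
eigenspace_dim = 2 - np.linalg.matrix_rank(J - 0.0 * np.eye(2))
assert eigenspace_dim == 1          # diagonalizing J would need 2
```

The point survives the example: without some study, you can't even tell which of the two equally impressive-sounding statements is the theorem and which is the error.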
But there are simply far too many areas of life involving putative “orthonormally diagonalizable matrices” for any one individual to be able to rationally investigate. At some point you have to take someone’s word for it; so rather than taking one expert’s word, you’re likely better off trusting a community of experts. A current example might be with global warming—most scientists seem to feel it’s a major issue.
Unfortunately, though, radical changes in thinking usually come from the margin, e.g., Galileo. The hard part, it seems to me, is distinguishing mere status-quo convention from genuine expert agreement.
Without the study, you wouldn’t have a basis for understanding? (grin/duck/run)