It is not always possible to measure and calibrate expertise. Even when the opportunity exists, few professional disciplines make use of it. Very few experts have had any formal training in, or calibration of, the judgements they provide.
That should change. Perhaps a rationality startup could emerge to provide professional development across a range of professions. Until then, it’s important to know whom to trust, how far to trust them, and when.
To the lay person, graphs are intimidating and atmospheric science is notoriously complex. Expert judgement is the ‘next best’ option; failing that, perhaps whatever is socially normative and marketable.
But how can a lay person interpret expert judgement? Who even gets counted as an expert? We frequently hear about a ‘scientific consensus’, but we also hear from seemingly erudite ‘skeptics’ who use graphs that are compelling in isolation yet uninterpretable in the broader context of all the other information around.
It looks like the naive algorithm for evaluating evidence – defer to whoever has the strongest credentials – while a reasonable first pass at a Bayesian conclusion, is not particularly efficient in some important cases:
…the claim of expert status is evaluated based on the professional characteristics and track record of the person. Their qualifications, experience, publications and professional standing are relevant. In the latter case, expert status may be evaluated by testing an expert’s judgements against independent evidence, matching predictions with outcomes observed subsequently – ACERA
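To make the second route concrete (judging an expert by their track record rather than their credentials), here is a minimal sketch of scoring past probabilistic predictions against the outcomes observed subsequently. The numbers, and the choice of a Brier score as the scoring rule, are my own illustrative assumptions, not anything ACERA prescribes.

```python
# Score an expert's probabilistic predictions against what actually happened.
# A lower Brier score means the stated probabilities tracked reality better.

def brier_score(predictions, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Illustrative track record: probabilities the expert assigned to events,
# and whether each event in fact occurred (1) or not (0).
stated_probabilities = [0.9, 0.8, 0.95, 0.7, 0.85]
observed_outcomes    = [1,   0,   1,    1,   0]

print(f"Brier score: {brier_score(stated_probabilities, observed_outcomes):.3f}")
# 0.0 would be perfect foresight; a constant 50% guess scores 0.25 regardless of outcomes.
```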
This has real-world consequences:
In Australian Federal courts, opinion evidence is admissible if it assists ‘…the trier of fact in understanding the testimony, or determining a fact in issue’ (ALRC 1985; p. 739-740). Scientific validity is established through falsifiability, peer review, acknowledged error rates, general acceptance of ideas and valid methods (Preston 2003). Science is presumed to act as a check on bias or prejudice. For instance, Cooke (1991) argued that if individuals follow the scientific method scrupulously, then they may arrive at results that have a claim on rationality and impartiality. The same thinking permeates current views of expert judgments in risk analysis. In this review, we will explore the view that scientists are advocates of a particular scientific view and spend their time trying to convince others of their position. In adversarial legal systems, potential expert witnesses are selected overwhelmingly for their credentials and for the strength of their support for the lawyer’s viewpoint (Shuman et al. 1993; in Freckelton 1995). Lawyers search for appropriate attributes in an expert and develop strategies to maximize their chance of winning a case. Success often depends on the plausibility or self-confidence of the expert, rather than the expert’s professional competence (ALRC 2000). – ACERA
I feel climate change is a good example of something that is allegedly highly important but extremely complex, where deferral to experts is probably prudent. However, knowing how to relate to expert evidence then becomes important, particularly if you, like me, are unsure what to do with all the expertise floating around while action stays sluggish, which prompts me to wonder – what’s going on here?
So why is a structured approach to interpreting expert judgement useful in general? Let my friends from ACERA tell it:
When expert opinion is required, ideally, there would be a pool of people available, appropriately trained, with extensive experience and sound normative skills. However, this pool is sometimes small or non-existent. Often, it is composed of people with conflicting opinions, values and motivations. The positive qualities of experts noted above have limitations that are often overlooked. Expertise is limited to technical domains that are narrower than most practical problems. Expert performance does not transfer to other disciplines, even those that seem quite similar. As we will see below, most experts are overconfident. It can be difficult to have a clear picture of what an expert knows, because experts themselves have difficulty knowing the limits of their own expertise (Freudenburg 1999, Ayyub 1999). Furthermore, the self-serving nature of judgments may bias advice (Krinitzsky 1993). This section outlines definitions of experts in several fields and evaluates the importance of defining expertise.
After doing some research, I’ve come up with a few notes. They are not in my own words, because others have already written about this adequately:
Who is an expert?
According to Hart (1986), attributes that characterize an expert include effectiveness (they use knowledge to solve problems with an acceptable rate of success), efficiency (they solve problems quickly) and awareness of limitations (they are willing to say when they cannot solve a problem). Other critical questions may include (Hart 1986, Walton 1997): Is the expert credible? Are they personally reliable? Do they have a special or conflicting interest? Is their assertion consistent with other experts and with their prior advice? What evidence is their opinion based on? How much experience do they have in the domain of the question at hand?
Studies of experts and expertise provide a few generalizations (see Feltovitch et al. 2006). Experts organise knowledge effectively, have superior recall of information and have improved abilities to abstract knowledge to new situations, compared to lay people. They perform the basic operations of their discipline efficiently, and are able to think critically about data and methods in their domain. Usually, attaining expertise requires both study and practical experience.
– ACERA
Should we trust experts?
Among experts in ecology: ‘No consistent relationships were observed between performance and years of experience, publication record or self-assessment of expertise.’ Expert responses were found to be overconfident, specifying 80% confidence intervals that captured the truth only 49–65% of the time. In contrast, student 80% intervals captured the truth 76% of the time, displaying close to perfect calibration. Best estimates from experts were on average more accurate than those from students. The best students outperformed the worst experts.
Expert status is a poor guide to good performance. In the absence of training and information on past performance, simple averages of expert responses provide a robust counter to individual variation in performance.
– ACERA
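Both findings above are easy to check on any set of elicited judgements: count how often the stated 80% intervals contain the realised values, and compare a simple unweighted average of the experts’ best estimates with the individual answers. A minimal sketch, with invented numbers standing in for the study’s data:

```python
# Two quick checks on elicited estimates: how often stated 80% intervals
# actually contain the truth, and what a simple average across experts gives.
# All numbers below are invented for illustration.

def interval_coverage(intervals, truths):
    """Fraction of (low, high) intervals that contain the realised value."""
    hits = sum(1 for (low, high), t in zip(intervals, truths) if low <= t <= high)
    return hits / len(intervals)

# Each expert gives an 80% interval for the same three quantities.
expert_intervals = {
    "expert_a": [(10, 14), (0.2, 0.4), (100, 130)],
    "expert_b": [(8, 20),  (0.1, 0.6), (90, 160)],
}
true_values = [17, 0.5, 120]

for name, intervals in expert_intervals.items():
    print(name, "80% intervals captured the truth",
          f"{interval_coverage(intervals, true_values):.0%} of the time")

# Simple unweighted average of best estimates for a single quantity.
best_estimates = [16.0, 11.0, 19.0, 14.0]   # four experts' point estimates
pooled = sum(best_estimates) / len(best_estimates)
print("pooled estimate:", pooled)            # 15.0, no weighting required
```

The unweighted mean is deliberately the simplest possible pooling rule; the point of the quote is that, absent training and information on past performance, it is a robust default.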