Partial review / thoughts / summaries of “Psychology of Intelligence Analysis” (Work in progress)
This book generally reads as if a CIA analyst wrote “Thinking Fast and Slow” with “CIA analysts” as the target audience (although written in 1999, a decade earlier). Mostly it argues that the CIA should take cognitive biases and other intelligence-failure modes seriously, and implement study and training to improve the situation. He has some overall suggestions on how to go about that, which I didn’t find very surprising.
Once an experienced analyst has the minimum information necessary to make an informed judgment, obtaining additional information generally does not improve the accuracy of his or her estimates. Additional information does, however, lead the analyst to become more confident in the judgment, to the point of overconfidence.
Experienced analysts have an imperfect understanding of what information they actually use in making judgments. They are unaware of the extent to which their judgments are determined by a few dominant factors, rather than by the systematic integration of all available information. Analysts actually use much less of the available information than they think they do.
Example Experiment: How many variables are relevant to betting on horses?
Eight experienced horserace handicappers were shown a list of 88 variables found on a typical horse-past-performance chart. Each handicapper identified the 5 most important items of information—those he would wish to use to handicap a race if he were limited to only five items of information per horse. Each was then asked to select the 10, 20, and 40 most important variables he would use if limited to those levels of information.
At this point, the handicappers were given true data (sterilized so that horses and actual races could not be identified) for 40 past races and were asked to rank the top five horses in each race in order of expected finish. Each handicapper was given the data in increments of the 5, 10, 20 and 40 variables he had judged to be most useful. Thus, he predicted each race four times—once with each of the four different levels of information. For each prediction, each handicapper assigned a value from 0 to 100 percent to indicate degree of confidence in the accuracy of his prediction.
When the handicappers’ predictions were compared with the actual outcomes of these 40 races, it was clear that average accuracy of predictions remained the same regardless of how much information the handicappers had available.
3 of the handicappers showed less accuracy as the amount of information increased.
2 improved their accuracy.
3 were unchanged.
All, however, expressed steadily increasing confidence in their judgments as more information was received.
The same relationships among amount of information, accuracy, and analyst confidence have been confirmed by similar experiments in other fields. (A footnote says a list of references is available in Lewis R. Goldberg’s “Simple Models or Simple Processes? Some Research on Clinical Judgments”.)
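To make the pattern concrete, here is a toy Python simulation of the handicapper setup. Everything in it is invented for illustration (the variable counts, weights, and the confidence formula are assumptions, not the study’s data): race outcomes are driven by a few dominant variables, the judge leans on those same few variables no matter how many are shown, and stated confidence is simply modeled as climbing with the amount of information, as the study observed. Accuracy stays roughly flat across information levels while confidence rises.

```python
import random

random.seed(0)

NUM_VARS = 40    # toy stand-in for the 88 chart variables in the study
DOMINANT = 3     # only a few variables actually drive the outcome
HORSES = 5
RACES = 4000

# Generate the races once so every information level sees identical data.
races = []
for _ in range(RACES):
    horses = [[random.gauss(0, 1) for _ in range(NUM_VARS)] for _ in range(HORSES)]
    # True quality: the dominant variables plus unpredictable race-day luck.
    true_scores = [sum(h[:DOMINANT]) + random.gauss(0, 1.5) for h in horses]
    winner = max(range(HORSES), key=lambda i: true_scores[i])
    races.append((horses, winner))

def accuracy(k):
    """Pick each race's winner when shown the top-k variables.

    The judgment rests on the same few dominant cues regardless of k;
    the extra variables get only a token weight (an assumption modeling
    'analysts use much less of the information than they think').
    """
    correct = 0
    for horses, winner in races:
        scores = [sum(h[:DOMINANT]) + 0.05 * sum(h[DOMINANT:k]) for h in horses]
        correct += max(range(HORSES), key=lambda i: scores[i]) == winner
    return correct / RACES

for k in (5, 10, 20, 40):
    # Assumed confidence model: reported confidence rises with the amount
    # of information received (numbers purely illustrative).
    confidence = 40 + k
    print(f"{k:2d} variables: accuracy {accuracy(k):.2f}, stated confidence {confidence}%")
```

Running it prints four accuracy figures that differ only by noise while the modeled confidence climbs from 45% to 80%, which is the shape of the result described above.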
This web page is an overall review/summary (about 1.5 pages long) which I recommend reading if you want an overall sense of the book.
In subsequent comments here, I’ll be jumping around to more concrete empirical claims that I could find.
(The horse-handicapping experiment above is from Chapter 5, “Do you really need more information?”)