Conjunction fallacy and probabilistic risk assessment
Summary:
There is a very dangerous way in which the conjunction fallacy can be exploited. Someone presents you with two to five detailed, very plausible failure scenarios whose probabilities are shown, with solid mathematics, to be very low. If you suffer from the conjunction fallacy, this will look as though it implies the design is highly safe, while in fact it is the detail of the scenarios that makes their probabilities so low.
Even if you realize that there may be many other scenarios that were not presented to you, you are still left with an incredibly low probability number attached to a highly plausible ("most likely") failure scenario, which you, being unaware of the power of conjunction, attribute to the safety of the design.
The conjunction fallacy can be viewed as a poor understanding of the relation between plausibility and probability. Adding extra details does not make a scenario seem less plausible (it can even increase its plausibility), but it does mathematically make the scenario less probable: for any events A and B, P(A and B) ≤ P(A).
Details:
What happens if a risk assessment is being prepared for (and possibly by) sufferers of the conjunction fallacy?
Detailed example scenarios will be chosen, such as:
"A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."
Then, as a risk estimate, you multiply the probability of a Russian invasion of Poland by the probability of such an invasion resulting in a suspension of diplomatic relations between the US and the Soviet Union, and multiply again by the probability of all this happening specifically in 1983. The resulting probability can be made extremely small for a sufficiently detailed scenario (you can add the Polish prime minister being assassinated if the probability is still too high for comfort).
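To make the arithmetic concrete, here is a minimal sketch of such a calculation in Python. All of the individual probabilities are invented purely for illustration; nothing here comes from any actual assessment:

    # All probabilities below are hypothetical, chosen only to
    # illustrate how conjunction shrinks the final number.
    p_invasion   = 0.05   # P(Russian invasion of Poland)
    p_suspension = 0.10   # P(suspension of relations | invasion)
    p_in_1983    = 0.20   # P(it happens specifically in 1983 | both)

    # Joint probability of the detailed scenario: the product of the
    # conditional probabilities, so every added detail shrinks it.
    p_scenario = p_invasion * p_suspension * p_in_1983
    print(p_scenario)     # ~0.001 -- already reassuringly small

    # Add one more detail, and the number shrinks a hundredfold.
    p_assassination = 0.01   # P(Polish PM assassinated | all the above)
    print(p_scenario * p_assassination)   # ~1e-05

No step in this calculation is mathematically wrong; the sleight of hand is entirely in what the tiny final number is taken to mean.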
To a sufferer of the conjunction fallacy it looks as though a very plausible, "most likely" scenario has been shown to be highly improbable, and thus that the risks are low. The sufferer does not expect that this probability would be just as low in an unsafe design.
It seems to me that risk assessment is routinely done in this fashion. Consider the Space Shuttle's reliability estimates, or the NRC cost-benefit analyses for the spent fuel pools, which go as low as one in 45 million years for the most severe scenario. (The same seems to happen in all of the NRC resolutions, to varying extents; feel free to dig through them.)
Those reports looked outright insane to me: a very small number of highly detailed scenarios are shown to be extremely improbable, so how in the world would anyone think that this implies safety? How can anyone take seriously a one-in-45-million-years scenario? That is near the point where a meteorite impact leads to social disorder that leads to the fuel pool running dry!
I couldn't understand it. Detailed scenarios are inherently unlikely to happen whether the design is safe or not; their unlikelihood is a property of their detailedness, not of the safety or unsafety of the design.
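Here is a toy calculation, again with made-up numbers, of why the smallness of any single detailed scenario says so little. Suppose the total failure probability of a design is spread across a great many distinct detailed scenarios; then the probability of any one of them is tiny whether the design is safe or not:

    # Toy model with invented numbers: a "safe" and an "unsafe" design,
    # each with its total failure risk spread evenly across a large
    # number of distinct, highly detailed scenarios.
    n_scenarios = 100_000

    for label, p_total in (("safe", 0.001), ("unsafe", 0.5)):
        p_one_scenario = p_total / n_scenarios
        print(f"{label}: total risk {p_total}, "
              f"one detailed scenario p = {p_one_scenario:.0e}")

    # safe: total risk 0.001, one detailed scenario p = 1e-08
    # unsafe: total risk 0.5, one detailed scenario p = 5e-06
    # Both per-scenario numbers look astronomically small, even though
    # the "unsafe" design fails half the time.

The even split is a crude assumption, but the point survives any split: a specific conjunction of details carves out a tiny slice of whichever total risk you started with.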
Then it clicked: if you read those reports through the goggles of the conjunction fallacy, it is what look like the most likely failure modes that are shown to be incredibly improbable. Previously (before reading LessWrong) I didn't really understand how anyone buys into this sort of thing, and could find no way even to argue. You can't quite talk someone out of something when you don't understand how they came to believe it. You say "there may be many scenarios that were not considered", and they know that already.
This is one seriously dangerous way in which the conjunction fallacy can be exploited, and it seems to be rather common in risk analysis.
Note: I do think the conjunction fallacy is responsible for much of the credibility given to such risk estimates. No one seems to seriously believe that the NRC always covers all the possible scenarios, yet at the same time there seems to be a significant misunderstanding of the magnitude of the problem: the NRC risk estimates are taken as within the ballpark of the correct value in the cost-benefit analysis for safety features. For nuclear power, widespread promotion of the results of such analyses leads to a massive loss of public trust once an accident happens, and consequently to a narrowing of the available options and a transition to less desirable energy sources (coal in particular), which is in itself a massive disutility.
[The other issue in the linked NRC study is, of course, that the cost-benefit analysis used internal probability when it should have used external probability.]
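A rough sketch, with invented numbers, of why that internal-versus-external distinction matters: once you grant even a small chance that the model producing the estimate is itself badly wrong, the model's tiny internal number no longer dominates the expected risk:

    # Hypothetical numbers for illustration only.
    p_internal    = 1 / 45_000_000   # per-year risk claimed by the model
    p_model_wrong = 0.01             # chance the model itself is badly wrong
    p_if_wrong    = 1 / 10_000       # modest outside-view risk in that case

    # External estimate: a mixture over "model right" and "model wrong".
    p_external = (1 - p_model_wrong) * p_internal + p_model_wrong * p_if_wrong
    print(f"{p_external:.1e}")       # ~1.0e-06 per year, dominated by
                                     # the possibility of model error

Under these assumptions, the external estimate is about fifty times larger than the internal one, and almost all of it comes from the chance that the model is wrong rather than from the model's own number.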
edit: minor clarifications.
edit: improved the abstract and clarified the article further, based on the comments.