I listened to The Failure of Risk Management by Douglas Hubbard, a book that vigorously criticizes qualitative risk management approaches (like the use of risk matrices), and praises a rationalist-friendly quantitative approach. Here are 4 takeaways from that book:
There are very different approaches to risk estimation, and their practitioners are often unaware of each other: you can do risk estimation like an actuary (relying on statistics, reference class arguments, and some causal models), like an engineer (relying mostly on causal models and simulations), like a trader (relying only on statistics, with no causal model), or like a consultant (usually with shitty qualitative approaches).
The state of risk estimation in insurance is actually pretty good: it’s quantitative, and there are strong professional norms around different kinds of malpractice. When actuaries tank a company because they ignored tail outcomes, they are at risk of losing their license.
The state of risk estimation in consulting and management is quite bad: most risk management is done with qualitative methods that have no positive evidence of working better than relying on intuition alone, and qualitative approaches (like risk matrices) have weird artifacts:
Fuzzy labels (e.g. “likely”, “important”, …) create illusions of clear communication. Just defining the fuzzy categories doesn’t fully alleviate that: when you ask people what probabilities each box corresponds to, they often fail to look at the definitions of the categories.
Inconsistent qualitative methods make cross-team communication much harder.
Coarse categories mean that you introduce weird threshold effects that sometimes encourage ignoring tail effects and make the analysis of past decisions less reliable.
When choosing between categories, people are susceptible to irrelevant alternatives (e.g. if you split the “5/5 importance (loss > $1M)” category into “5/5 ($1-10M), 5/6 ($10-100M), 5/7 (>$100M)”, people answer a fixed “1/5 (<$10k)” category less often).
Following a qualitative method can increase confidence and satisfaction, even in cases where it doesn’t increase accuracy (there is an “analysis placebo effect”).
Qualitative methods don’t prompt their users to seek empirical evidence to inform their choices.
Qualitative methods don’t prompt their users to measure their risk estimation track record.
Using quantitative risk estimation is tractable and not that weird. There is a decent track record of people trying to estimate very-hard-to-estimate things, and a vocal enough opposition to qualitative methods that they are slowly being removed from risk estimation standards. This makes me much less sympathetic to the absence of quantitative risk estimation at AI labs.
A big part of the book is an introduction to rationalist-type risk estimation (estimating various probabilities and impacts, aggregating them with Monte Carlo, rejecting Knightian uncertainty, doing calibration training and prediction markets, starting from a reference class and updating with Bayes); I sketch what this kind of Monte Carlo aggregation looks like in code below. He also introduces some rationalist ideas in parallel while arguing for his thesis (e.g. isolated demands for rigor). It’s the best legible and “serious” introduction to classic rationalist ideas I know of.
The book also contains advice if you are trying to push for quantitative risk estimates in your team / company, and a very pleasant and accurate dunk on Nassim Taleb (in particular on his claims that models are bad, which come without a good justification for why reasoning without models is better).
Overall, I think the case against qualitative methods and for quantitative ones is somewhat strong, but it’s far from a slam dunk, because there is no evidence that some methods lead to worse actual business outcomes than others. The author also fails to acknowledge, or provide conclusive evidence against, the possibility that people may have good qualitative intuitions about risk even if they fail to translate these intuitions into numbers that make any sense (your intuition sometimes does the right estimation and math even when you suck at doing the estimation and math explicitly).
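To make the flavor of that quantitative approach concrete, here is a minimal sketch of the kind of Monte Carlo aggregation described above. The risk register, probabilities, and lognormal impact parameters are made up for illustration; they are not from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Made-up risk register: each risk has a probability of occurring this year
# and a lognormal impact distribution (all parameters are illustrative only).
risks = [
    {"name": "data breach",        "p": 0.05, "impact_median": 2e6, "impact_sigma": 1.0},
    {"name": "key supplier fails", "p": 0.10, "impact_median": 5e5, "impact_sigma": 0.8},
    {"name": "regulatory fine",    "p": 0.02, "impact_median": 1e7, "impact_sigma": 1.2},
]

total_loss = np.zeros(N)
for r in risks:
    occurs = rng.random(N) < r["p"]  # does this event happen in a given simulated year?
    impact = rng.lognormal(np.log(r["impact_median"]), r["impact_sigma"], N)
    total_loss += occurs * impact    # add the loss only in the years where it happens

# The output is a full loss distribution rather than a colored grid.
print(f"mean annual loss: ${total_loss.mean():,.0f}")
print(f"P(loss > $5M):    {np.mean(total_loss > 5e6):.1%}")
print(f"95th percentile:  ${np.quantile(total_loss, 0.95):,.0f}")
```

From the same samples you can also read off a loss exceedance curve (P(loss > x) for a range of x), which is the kind of output quantitative risk people tend to prefer over a risk matrix.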
I also listened to How to Measure Anything in Cybersecurity Risk (2nd Edition) by the same author. It had a huge amount of overlap with The Failure of Risk Management (and the non-overlapping parts were quite dry), but I still learned a few things:
Executives of big companies now care a lot about cybersecurity (e.g. citing it as one of the main threats they have to face), which wasn’t true in ~2010.
Evaluation of cybersecurity risk is not at all synonymous with red teaming. This book is entirely about risk assessment in cyber and doesn’t talk about red teaming at all. Rather, it focuses on reference class forecasting, comparison with other incidents in the industry, trying to estimate the damages if there is a breach, … It only captures information from red teaming indirectly via expert interviews.
I’d like to find a good resource that explains how red teaming (including intrusion tests, bug bounties, …) can fit into a quantitative risk assessment.
Is there a short summary on the rejecting Knightian uncertainty bit?
By Knightian uncertainty, I mean “the lack of any quantifiable knowledge about some possible occurrence” i.e. you can’t put a probability on it (Wikipedia).
The TL;DR is that Knightian uncertainty is not a useful concept for making decisions, while the use of subjective probabilities is: if you are calibrated (which you can be trained to become), then you will be better off making different decisions on p=1% “Knightian uncertain events” and p=10% “Knightian uncertain events”.
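A toy illustration with made-up numbers (mine, not the book’s): suppose a mitigation costs $50k and would prevent a $1M loss.

```python
# Toy example: does the mitigation pay for itself under your calibrated probability?
mitigation_cost = 50_000
loss_if_event = 1_000_000

for p in (0.01, 0.10):  # two "Knightian uncertain" events with different calibrated probabilities
    expected_loss = p * loss_if_event
    decision = "mitigate" if expected_loss > mitigation_cost else "accept the risk"
    print(f"p={p:.0%}: expected loss ${expected_loss:,.0f} -> {decision}")
```

At p=1% the expected loss is $10k and the mitigation isn’t worth it; at p=10% it’s $100k and it is. Calling both events “Knightian” throws away exactly the information that flips the decision.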
For a more in-depth defense of this position in the context of long-term predictions, where it’s harder to know whether calibration training works, see the latest Scott Alexander post.
If you want to get the show-off nerds really on board, then you could make a poast about the expected value of multiplying several distributions (maybe normal distr or pareto distr). Most people get this wrong! I still don’t know how to do it right lol. After I read it I can dunk on my friends and thereby spread the word.
For the product of random variables, there are closed-form solutions for some common distributions, but I guess Monte Carlo simulations are all you need in practice (plus with Monte Carlo you always get the whole distribution, not just the expected value).
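A minimal sketch of both, with made-up independent lognormal factors (so X = A*B*C, and none of the numbers mean anything): for independent variables the mean of the product is the product of the means, and Monte Carlo gives you the whole distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Made-up independent lognormal factors
A = rng.lognormal(mean=0.0, sigma=0.5, size=N)
B = rng.lognormal(mean=1.0, sigma=0.3, size=N)
C = rng.lognormal(mean=2.0, sigma=0.8, size=N)
X = A * B * C

# Closed form: for independent variables E[ABC] = E[A]E[B]E[C],
# and a lognormal(mu, sigma) has mean exp(mu + sigma^2 / 2).
closed_form = np.exp(0.0 + 0.5**2 / 2) * np.exp(1.0 + 0.3**2 / 2) * np.exp(2.0 + 0.8**2 / 2)

print(f"Monte Carlo E[X]: {X.mean():.2f}")
print(f"Closed form E[X]: {closed_form:.2f}")
print(f"Median of X:      {np.median(X):.2f}  (well below the mean)")

counts, edges = np.histogram(X, bins=50)  # the histogram you'd paste into the tweet reply
```

The classic gotcha is that for right-skewed factors the mean of the product ends up well above the product of the “typical” (median) values, which is exactly what a quick histogram makes obvious.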
Quick convenient Monte Carlo sim UI seems tractable & neglected & impactful. Like you could reply to a tweet with “hello you are talking about an X=A*B*C thing here. Here’s a histogram of X for your implied distributions of A, B, C” or whatever.
Both causal.app and getguesstimate.com have pretty good Monte Carlo UIs.
Oh sweet