Norman Rasmussen's analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident in ways that previous experts had not (see McGrayne 2011, p. 180).
Is there any way that a policy maker could have known in advance to pay attention to Rasmussen rather than other experts? Is this a case of retroactively selecting the predictor who happened to be right out of a large group of varied, but roughly equally justified, predictors, or did Rasmussen use systematically better methods for making his predictions?
It’s worth noting that stories of catastrophes that were successfully averted because someone listened to an expert may be hard to find.
If an expert tells you to add a safety mechanism, and you end up using that mechanism, you know that the expert helped you.
Right, but the story won’t be written up, or will be harder to find.
Or the expert caused you to waste money on a needless safety mechanism.
I mean a safety mechanism like a button that shuts down the assembly line. If someone gets caught in the machinery and you push the button to prevent them from getting (more) hurt, you will be happy the expert told you to install that button.
Aha. I was reading “use” as “install”, not “activate during emergency”. I agree.
Yes. Rasmussen used Bayes, while everyone else used the methods of (1) Frequentism or (2) Experts Must Have Great Intuitions.
All else being equal, I would put more trust in a report that uses Bayesian statistics than in a report that uses Frequentist statistics, but I wouldn't expect that strong an effect from that alone. (I would expect a strong increase in accuracy for using any kind of statistics over intuition.)
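For a sense of how small that effect can be once there is real data, here is a minimal sketch (entirely made-up numbers, not anything from Rasmussen's report): a frequentist point estimate and a Bayesian posterior mean computed from the same hypothetical component failure record.

```python
# Toy comparison (made-up data): estimate a component's per-demand
# failure probability from a hypothetical operating record.

failures, demands = 3, 1000   # hypothetical: 3 failures in 1000 demands

# Frequentist: maximum-likelihood estimate.
p_freq = failures / demands                   # 0.0030

# Bayesian: uniform Beta(1, 1) prior updated on the same data;
# posterior mean = (failures + 1) / (demands + 2).
p_bayes = (failures + 1) / (demands + 2)      # ~0.0040

print(f"frequentist: {p_freq:.4f}")
print(f"Bayesian:    {p_bayes:.4f}")
```

With only a handful of recorded failures the prior does shift the answer somewhat, which is presumably where Bayes earns its keep for rare events; with abundant data the two estimates converge.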
Following your link, I notice that Rasmussen's report used a fault tree. I would expect that the consideration of failure modes of each component of a nuclear reactor played a huge role in his accuracy, and that Bayesian and Frequentist statistics would largely agree on how to get individual failure rates from historical data and how to synthesize that information into a failure rate for the whole reactor. Assuming the other experts did not also use fault trees, I would credit the fault trees more than Bayes for Rasmussen's success. (And if they did, I would wonder where they went wrong.)
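Here is the kind of synthesis I mean by a fault tree, as a toy sketch with invented components and probabilities (again, nothing from the actual report): per-component failure probabilities are combined through AND/OR gates, assuming independence purely for simplicity, to get a probability for the top event.

```python
# Toy fault tree (invented events and numbers): the top event "coolant flow
# lost" occurs if the pump fails OR both relief valves fail.
# Component independence is assumed only for illustration.

p_pump = 1e-4        # hypothetical pump failure probability
p_valve_a = 4e-3     # hypothetical primary relief valve
p_valve_b = 4e-3     # hypothetical backup relief valve

p_valves = p_valve_a * p_valve_b              # AND gate: both valves fail
p_top = 1 - (1 - p_pump) * (1 - p_valves)     # OR gate: either branch fails

print(f"top event probability: {p_top:.2e}")  # dominated by the pump branch
```

The per-component numbers could come from either school of statistics; the structure of the tree, i.e. which combinations of component failures lead to the top event, is what does most of the work.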
This is not a convincing argument to a policy maker.
Definitely not!