Note that the conclusion of the toy model rests not on “we did the 9-dimensional integral and got a very low number” but on “we did Monte Carlo sampling and ended up with 21%”. It seems possible that this was not feasible 30 years ago, though perhaps it was 20 years ago. (Not Monte Carlo sampling itself, which is as old as Fermi, but being able to do this sort of numerical integration sufficiently cheaply.)
Also, the central intuition guiding the alternative approach is that the expectation of a product is the product of the expectations, which is actually true (for independent factors). What’s going on here is elaborating on the generator of P(ETI=0) in a way that’s different from “well, we just use a binomial with the middle-of-the-pack rate, right?”. This sort of hierarchical modeling of parameter uncertainties is still fairly rare, even among professional statisticians today, so it’s not a huge surprise to me that the same is true for people here. [To be clear, the alternative is picking the MLE model and using only in-model uncertainty, which seems to be standard practice from what I’ve seen. Most of the methods that bake in model uncertainty are so-called “model-free” methods.]
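To make the contrast concrete, here is a minimal sketch in Python of the two ways of getting P(ETI=0). Every number in it is a placeholder of my own (the star count, the number of factors, the 0.1 point estimates, the log-uniform ranges), not the paper’s actual model; the only point is how differently the two calculations can come out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Everything below is a placeholder, not the paper's numbers: the star count,
# the number of factors, and the ranges are illustrative only.
N_STARS = 1e11       # hypothetical number of candidate star systems
N_DRAWS = 100_000    # Monte Carlo draws over the uncertain *parameters*
N_FACTORS = 7        # per-star chance of ETI = product of these factors

# --- Point-estimate approach -----------------------------------------------
# Multiply middle-of-the-pack point estimates into a single rate, then treat
# every star as an independent Bernoulli trial with that rate.
point_rate = 0.1 ** N_FACTORS
p_empty_point = np.exp(N_STARS * np.log1p(-point_rate))   # binomial P(0 successes)

# --- Hierarchical approach --------------------------------------------------
# Sample the uncertain parameters first, compute P(no ETI) conditional on each
# sampled parameter vector, then average. Parameter uncertainty is shared by
# all stars at once, so it does not wash out the way per-star noise does.
log10_factors = rng.uniform(-3.0, 0.0, size=(N_DRAWS, N_FACTORS))  # log-uniform factors
rate_per_draw = 10.0 ** log10_factors.sum(axis=1)
p_empty_per_draw = np.exp(N_STARS * np.log1p(-rate_per_draw))
p_empty_hier = p_empty_per_draw.mean()

print(f"point estimate: P(ETI = 0) ~= {p_empty_point:.3g}")
print(f"hierarchical:   P(ETI = 0) ~= {p_empty_hier:.3g}")
```

The gap arises because the point-estimate version lets per-star randomness average out across a huge number of stars, while the hierarchical version keeps the parameter uncertainty shared across all stars at once, so draws in which the product of factors is tiny contribute a real chance of total emptiness.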
I’m quite sure this sort of sampling is really cheap even with hardware available 30 years ago. Taking a single sample just requires drawing 6 uniform values and 1 normal value, adding them, and checking whether the sum is less than a constant. Even with 1988 hardware, it should be possible to do this >100 times per second on a standard personal computer. And you only need tens of thousands of samples to get a probability estimate that is almost certainly accurate to within 1%.
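For concreteness, here is roughly what such a sampler looks like in Python. The distributions, their parameters, and the threshold below are placeholders rather than the paper’s numbers; what matters is that each sample is just seven random draws, an addition, and a comparison, and that the standard error of the resulting proportion shrinks like 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 50_000        # tens of thousands of samples
LOG10_THRESHOLD = -11.0   # hypothetical cutoff in log10 space

def one_sample(rng):
    # Work in log10 space: 6 log-uniform factors plus 1 log-normal factor,
    # summed and compared against a constant. Ranges are placeholders.
    log10_uniform_part = rng.uniform(-4.0, 0.0, size=6).sum()
    log10_normal_part = rng.normal(loc=-1.0, scale=2.0)
    return (log10_uniform_part + log10_normal_part) < LOG10_THRESHOLD

hits = sum(one_sample(rng) for _ in range(N_SAMPLES))
p_hat = hits / N_SAMPLES

# The standard error of a proportion is sqrt(p(1-p)/n) <= 0.5/sqrt(n);
# with 50,000 samples that is about 0.22%, so the estimate is almost
# certainly within 1% of the true probability.
std_err = (p_hat * (1 - p_hat) / N_SAMPLES) ** 0.5
print(f"estimated probability: {p_hat:.3f} (standard error {std_err:.4f})")
```

NumPy is used here only for convenience; the same loop written against any basic random number generator costs a handful of arithmetic operations per sample, which is what makes the hardware argument go through.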