Numeracy and consequence-based thinking, sure. But as far as probabilistic thinking goes, I quite disagree, for roughly the reasons stated here*.
I tried to illustrate the following before, but let me make it more explicit. In using any sort of mathematical model there are a few steps: The first is to determine relevant parameters that you can try to assign numbers to (such as the probability of events). The second is to create a model for how those parameters interact. The third is to experiment with different inputs to see the different outcomes, so you can optimize your model. Fourth, you can make predictions, with error bounds as determined by your model.
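To make those four steps concrete, here's a minimal sketch in Python; the model, the parameter values, and the names are all hypothetical, chosen only to show where each step lives:

```python
import random
import statistics

# Step 1: pick parameters you can assign numbers to (all values hypothetical).
p_event = 0.3        # guessed probability that the event occurs
effect_size = 2.0    # guessed impact when it does

# Step 2: a model of how the parameters interact (here, a trivial one).
def mean_outcome(p, effect, n_trials=10_000):
    """Monte Carlo estimate of the expected outcome under the model."""
    hits = sum(1 for _ in range(n_trials) if random.random() < p)
    return hits * effect / n_trials

# Step 3: experiment with different inputs and watch how the output moves.
for p in (0.1, 0.3, 0.5):
    print(f"p_event={p}: mean outcome ~ {mean_outcome(p, effect_size):.2f}")

# Step 4: a prediction with crude error bounds (95% interval over repeated runs).
runs = [mean_outcome(p_event, effect_size, n_trials=1_000) for _ in range(200)]
mean = statistics.mean(runs)
half_width = 1.96 * statistics.stdev(runs) / len(runs) ** 0.5
print(f"prediction: {mean:.2f} ± {half_width:.2f}")
```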
The first two steps are incredibly important, but I often see naive Bayesians jumping straight to the third or even fourth step. With complex systems subject to uncertainty and unknown factors, almost all of the work lies in the first and second steps. Getting back to one of my earlier comments: if you construct a model which considers only the ways in which an aggressive military intervention can increase the number of suicide bombings, then you'll of course show that such an intervention will increase the number of suicide bombings. But this is a modeling failure at the first step: by choosing only those parameters, you've assumed your conclusion. It's not the math that is the problem; it is a lack of understanding of the phenomenon being described (and I think this is at the heart of many, or even most, instances of failed models).
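As a toy illustration of that first-step failure (all names and numbers here are invented), a model that includes only the channels by which an intervention increases the outcome can't help but predict an increase, whatever inputs you feed it:

```python
# A deliberately one-sided model: every channel included pushes the
# outcome UP, so the conclusion is fixed at step 1, before any math happens.
def bombings(intervention, recruitment_effect=1.5, grievance_effect=2.0,
             baseline=10.0):
    # Only ways the intervention can INCREASE bombings are modeled;
    # any suppressive channel (e.g. degraded capability) is simply absent.
    return baseline + intervention * (recruitment_effect + grievance_effect)

for level in (0, 1, 5, 10):
    print(level, bombings(level))  # increases with `level` by construction
```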
*I know that post is long, but I highly recommend reading at least section I, on what probabilities are, and section V, on why/when you can use them.
I hear you about using mathematical modeling. However, I’m talking about quick, intuitive, System 1 probabilistic estimates here, more Fermi style than anything else. Remember, the goal is to convey to a broad audience that they can do probabilistic estimates, too. Let’s not let the perfect be the enemy of the good.
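For instance, the classic back-of-the-envelope estimate of piano tuners in a Chicago-sized city works from nothing but rough guesses (every figure below is a guess, not data) and still lands within an order of magnitude:

```python
# A classic Fermi estimate: piano tuners in a city of ~3 million.
population = 3_000_000
people_per_household = 2
piano_ownership = 1 / 20     # fraction of households with a piano
tunings_per_piano = 1        # per year
tunings_per_day = 4          # one tuner's daily workload
working_days = 250           # per year

pianos = population / people_per_household * piano_ownership
tuners = pianos * tunings_per_piano / (tunings_per_day * working_days)
print(f"roughly {tuners:.0f} piano tuners")  # ~75; right order of magnitude
```

The point isn't the exact number; it's that multiplying a handful of guessed factors gets you within shouting distance of reality.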
But can they do good probabilistic estimates? A little knowledge is a dangerous thing...
I’d prefer they do some rather than none at all. Then they would improve over time, as research by Tetlock and others shows.
I’d prefer they first figure out the limits of their competence before starting to act on their Fermi estimates. And Tetlock’s sample is not quite the general public.
I guess we have different preferences. I’d rather that people experiment and learn by doing.
Whether experimenting is a good thing depends on the cost of failure.
Also, on whether you can distinguish success from failure.