Actually, I would love it if politicians and analysts used actual numbers. Then we could check the accuracy of their forecasts. The hard part is getting them pinned down on specific numbers. If we could get numeracy and probabilistic thinking into the political system, it seems we would be much better off, as it would let us gather real data. What do you think?
That may be desirable, but it doesn’t make unrealistic numbers any more realistic. The US has conducted over 3,000 airstrikes in Syria, and ISIS does not have anywhere near 300,000 people with them. Of course, you said “report of an airstrike,” not “airstrike,” but presumably most attacks have been reported at least in local media there.
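To spell out the sanity check above (my own arithmetic from the two numbers quoted; the per-strike recruitment figure is only implied by them): 3,000 reported strikes × ~100 recruits per strike ≈ 300,000 recruits, which is far more people than ISIS actually has.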
What I said was a report of an airstrike that killed civilians, not simply an airstrike. An airstrike that kills civilians, reported in prominent Muslim media, is a great recruiting tool.
Numeracy and consequence-based thinking, sure. But as far as probabilistic thinking goes, I quite disagree, for roughly the reasons stated here*.
I tried to illustrate the following, but let me make it more explicit. Using any sort of mathematical model involves a few steps: the first is to determine the relevant parameters you can try to assign numbers to (such as the probabilities of events); the second is to create a model of how those parameters interact; the third is to experiment with different inputs and see the different outcomes so you can optimize your model; the fourth is to make predictions, with error bounds as determined by your model.
The first two steps are incredibly important, but I often see naive Bayesians jumping straight to the third or even the fourth step. With complex systems subject to uncertainty and unknown factors, almost all of the work lies in the first and second steps. Getting back to one of my earlier comments: if you construct a model which only considers the ways in which an aggressive military intervention can increase the number of suicide bombings, then you’ll of course show that such an intervention will increase the number of suicide bombings. But this is a modeling failure at the first step; by choosing only those parameters you’ve assumed your conclusion. It’s not the math that is the problem, it is a lack of understanding of the phenomenon being described (and I think this is at the heart of many or even most failed models).
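A minimal toy sketch of that failure mode (my own illustration, not the commenter’s model; every number in it is a made-up placeholder): when the only parameter you include is a recruitment channel, the model can only ever predict an increase, no matter what values you plug in.

```python
# Toy illustration of assuming the conclusion at step one (parameter choice).
# All parameter values are hypothetical placeholders, not estimates.

def one_sided_model(strikes, recruits_per_reported_strike=50):
    # The only parameter chosen is a channel by which strikes add bombers,
    # so the predicted change is positive by construction.
    return strikes * recruits_per_reported_strike

def two_sided_model(strikes, recruits_per_reported_strike=50,
                    fighters_removed_per_strike=80):
    # With a parameter for the opposing effect, the sign of the prediction
    # depends on the (uncertain) numbers rather than on the model's shape.
    return strikes * (recruits_per_reported_strike - fighters_removed_per_strike)

print(one_sided_model(3000))  # always > 0, whatever the inputs
print(two_sided_model(3000))  # sign depends on the assumed parameters
```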
*I know that post is long, but I highly recommend reading at least section I (on what probabilities are) and section V (on why/when you can use them).
I hear you about mathematical modeling. However, I’m talking about quick, intuitive, System 1 probabilistic estimates here, more Fermi-style than anything else. Remember, the goal is to convey to a broad audience that they can make probabilistic estimates, too. Let’s not let the perfect be the enemy of the good.
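For concreteness, a minimal sketch of the kind of quick Fermi-style estimate meant here (my own generic example, the classic piano-tuners question; every factor range is a rough guess): multiply a handful of order-of-magnitude factors and, if you want a sense of the spread, sample each factor from a wide range.

```python
import math
import random

def fermi_estimate(factor_ranges, n_samples=10_000):
    """Multiply rough factors, sampling each log-uniformly from its (low, high) guess."""
    results = []
    for _ in range(n_samples):
        product = 1.0
        for low, high in factor_ranges:
            product *= math.exp(random.uniform(math.log(low), math.log(high)))
        results.append(product)
    results.sort()
    # Return roughly the 5th percentile, median, and 95th percentile of the products.
    return results[n_samples // 20], results[n_samples // 2], results[-(n_samples // 20)]

# Classic "piano tuners in a city" estimate; all ranges are rough guesses.
low, mid, high = fermi_estimate([
    (1e6, 5e6),           # people in the city
    (1 / 100, 1 / 20),    # pianos per person
    (0.5, 2),             # tunings per piano per year
    (1 / 1500, 1 / 500),  # tuner-years of work per tuning (one tuner does ~500-1500/yr)
])
print(f"roughly {low:.0f} to {high:.0f} tuners, median ~{mid:.0f}")
```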
But can they do good probabilistic estimates? A little knowledge is a dangerous thing...
I’d prefer they do some rather than none at all. Then they would improve over time, as research by Tetlock and others shows.
I’d prefer they first figure out the limits of their competence before starting to act on their Fermi estimates. And Tetlock’s sample is not quite the general public.
I guess we have different preferences. I’d rather that people experiment and learn by doing.
Whether experimenting is a good thing depends on the cost of failure.
Also, on whether you can distinguish success from failure.