Are we misreporting p(doom)s?
I usually say that my p(doom) is 50%, but that 50% doesn't mean the same thing it would in a weather forecast.
In a weather forecast, the percentage reports the output of an ensemble: forecasters run a series of simulations, and the stated percentage is the share of runs that produced that result. A forecast of a 100% chance of rain, then, does not mean the chance of rain is actually near 100%, because the forecast itself has error bars; ten days out, a forecast will be wrong about 50% of the time. So a 10-day forecast of a 100% chance of rain means the actual chance of rain is closer to 50%.
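To make that arithmetic explicit (a sketch, with the 50% error rate taken from the claim above, and the chance of rain given a busted forecast assumed to be roughly zero):

$$P(\text{rain}) = \underbrace{P(\text{forecast right})}_{0.5} \cdot 1 \;+\; \underbrace{P(\text{forecast wrong})}_{0.5} \cdot \underbrace{P(\text{rain} \mid \text{forecast wrong})}_{\approx\, 0} \;\approx\; 0.5$$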
In my mental simulations, the outcome is bad 100% of the time. I can't construct a convincing scenario in my mind where things work out, at least contingent on the continued development of AI. But I know that there is much I don't know, things I haven't yet considered, and so on. Hence the 50% error margin. But as with the weather forecast, this can be misread as my believing that things work out 50% of the time.
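Written the same way (again a sketch: the 1.0 is the output of my mental simulations, the 0.5 is my confidence in them, and the last term is exactly the part I can't quantify):

$$p(\text{doom}) = \underbrace{P(\text{my model right})}_{0.5} \cdot \underbrace{P(\text{doom} \mid \text{model right})}_{1.0} \;+\; \underbrace{P(\text{my model wrong})}_{0.5} \cdot \underbrace{p(\text{doom} \mid \text{model wrong})}_{?}$$

The reported 50% is just the first term; the second term never gets communicated, and that gap is where the two readings of "p(doom) = 50%" come apart.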
Is there existing terminology that accounts for this distinction? If not, does it mean that p(doom) figures are being misunderstood, or reported with different intended meanings?
Yes, thank you, I think that's it exactly. I don't think people are communicating this well when they report predictions.