This may be called something like the “low-probability forecaster reputation problem”.
People in the village were not necessarily wrong ex ante to dismiss the warnings. They should, of course, update only a little about forecasting ability when no wolf appears after a low-probability warning. But that means they also cannot assess the accuracy of the forecasts. And if someone is an expert in certain low-probability events and nothing else, the handful of resolved forecasts tells you little about their ability. You would need to examine the models behind the forecasts, but what tells you that is a good use of your time? There are crackpots everywhere.
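To see how weakly a single non-event updates beliefs, here is a minimal Bayesian sketch. All the numbers (the 50% prior on the forecaster being calibrated, the 15% warning, the 5% background base rate) are illustrative assumptions, not from the text:

```python
def posterior_good(prior_good, p_forecast, base_rate, wolf_appeared):
    """Bayesian update on the hypothesis 'the forecaster is calibrated'.

    If calibrated, the wolf appears with the forecast probability
    p_forecast; if the forecaster is uninformative noise, the wolf
    appears with the background base rate.
    """
    if wolf_appeared:
        like_good, like_noise = p_forecast, base_rate
    else:
        like_good, like_noise = 1 - p_forecast, 1 - base_rate
    num = prior_good * like_good
    return num / (num + (1 - prior_good) * like_noise)

# A 15% wolf warning, and no wolf shows up: the posterior that the
# forecaster is any good barely moves (from 0.50 to about 0.47).
p = posterior_good(prior_good=0.5, p_forecast=0.15, base_rate=0.05,
                   wolf_appeared=False)
print(round(p, 3))
```

After dozens of such non-events the evidence still accumulates only slowly, which is exactly the villagers' problem: the forecasts that would sharply distinguish a calibrated forecaster from a crackpot almost never resolve.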
Therefore, people try to get bets or forecasts on things that are implied by the low-probability forecaster's model but happen more often and earlier. If your model implies big grey wolves tonight with 15% probability, should we already see dead sheep by noon? If your model implies war between states X and Y next year, what should we see next month that people with different models would not expect?
If you truly specialize in forecasting low-probability events and cannot offer any evidence that your model is right, that is a tragedy. But it is understandable that people do not update on your forecasts.