I think it was probably neglected due to being an African illness
You mean, because:
Ebola did not crash the stock market...
It had a terrifying case fatality rate, but a transmissibility rate too low to make it a world-shaker. Most of its fatalities were in nations without much economic clout. It’s relatively easy for a well-equipped nation to prevent its spread.
...it was far away*, low chance of impact, and easily handled if it did get over here?
*Perhaps coronavirus was neglected due to this as well. ‘Accurate’ and ‘accurate for good reasons’ may not always align at the tails. More concretely: one way to ‘ignore all diseases that won’t be important’ is to ‘ignore all diseases.’
A self-limiting horror:
Ebola also might suffer from a fatality rate that is too high, too fast, limiting its spread.
Confounding:
Factor 4: Institutional response
Has a city-wide lockdown been attempted in a city with a population of at least 10 million, or have travel restrictions to or from major economies been implemented?
This proxy may capture useful information (‘if they’re worried, you should be too’), but risks being brittle in that regard. Consider:
1. A prediction market believes that in a month these measures will be put into practice. Should you act on this prediction?
2. An outbreak appears identical to an earlier case in which such measures were implemented, but the response hasn’t occurred (in the same timeframe). How should one respond?
In other words, a measure that says “it’s right to panic when other people panic” may be correct as far as it goes, but conditioning the action ‘we panic’ on ‘other people panic’ is a bad move if we want to be able to respond faster than said institutions/the stock market.
If a historically well-calibrated prediction market, with a massive number of predictions already made and a reliable vetting process, gives 99:1 odds that there’ll be a shutdown in a month, then yes, you should probably act as if there’ll be a shutdown. But there’ll always be a difference in certainty between “predicted to occur” and “has occurred.”
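The odds-to-action reasoning above can be sketched numerically. This is an illustrative toy, not anything from the thread: the cost figures are made-up placeholders, and the decision rule is a bare expected-cost comparison.

```python
# Illustrative sketch: turn market odds into a probability, then act when
# the expected cost of staying unprepared exceeds the cost of acting early.
# All numbers are hypothetical placeholders.

def odds_to_probability(for_, against):
    """Convert 'for:against' odds into an implied probability."""
    return for_ / (for_ + against)

def should_act(p_event, cost_of_acting, cost_if_unprepared):
    """Act if the expected cost of doing nothing exceeds the cost of acting."""
    return p_event * cost_if_unprepared > cost_of_acting

p = odds_to_probability(99, 1)
print(p)  # 0.99
print(should_act(p, cost_of_acting=10, cost_if_unprepared=100))  # True
```

Note the gap the comment points at: even at 99:1, the expected-cost calculation still discounts the downside by the residual uncertainty, which a “has occurred” observation would not.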
If a new outbreak occurs identical to a past outbreak but no shutdown has occurred in the same timeframe, you should not mark this criterion as met, and you should try to find out what’s changed in the world to prevent a shutdown.
A shutdown is partly a response to other conditions, but I suggest that a shutdown in one place lowers the bar for future shutdowns by normalizing the behavior. A global wave of shutdowns predicts a slowed economy, which predicts a crashing stock market.
If a new outbreak occurs identical to a past outbreak but no shutdown has occurred in the same timeframe, you should not mark this criterion as met, and you should try to find out what’s changed in the world to prevent a shutdown.
Yes. But what’s changed isn’t necessarily risk, just perception:
A shutdown is partly a response to other conditions, but I suggest that a shutdown in one place lowers the bar for future shutdowns by normalizing the behavior.
The above may be true across time as well as distance. I.e., if health efforts become more successful as time goes on, then measures like shutdowns might not be used:
because they’re not normal, even if they “should be.” A cascade of shutdowns might indicate that the time for such measures had already passed, but no one wanted to be the first to act.
unless a higher standard is met. But the point at which the “alarm bell” should be rung should not change in such a fashion.*
*I’d add the caveat “unless changes have occurred which affect what it predicts,” but recent events suggest that such ‘updates’ are more likely to stem from errors, and should only be pushed through once they have been tested against reality at least once.
If anything, the fact that people are slower to adopt measures when it becomes apparent they should, because such measures haven’t been normal for long enough, should make one more inclined to ring the bell earlier, rather than later.
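The “normalization lowers the bar” dynamic above resembles a simple threshold-cascade model. As a hedged sketch (the regions, thresholds, and bar-drop size are invented purely for illustration):

```python
# Toy threshold-cascade model of shutdown normalization. A region shuts
# down once the count of prior shutdowns reaches its threshold; each
# shutdown also lowers every remaining region's effective threshold by
# `bar_drop` (the normalization effect). All values are hypothetical.

def shutdown_cascade(thresholds, bar_drop=0):
    """Return how many regions end up shutting down."""
    shut = set()
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            if i in shut:
                continue
            effective = t - bar_drop * len(shut)
            if len(shut) >= effective:
                shut.add(i)
                changed = True
    return len(shut)

regions = [0, 2, 4]  # hypothetical reluctance thresholds
print(shutdown_cascade(regions, bar_drop=0))  # 1: only the eager region acts
print(shutdown_cascade(regions, bar_drop=1))  # 3: normalization cascades
```

With no normalization effect, only the region already willing to act does so; once each shutdown lowers everyone else’s bar, the same initial conditions produce a full cascade, matching the claim that a first shutdown changes what later actors are willing to do.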