I don’t know about SARS, but in the case of H1N1 it wasn’t “crying wolf” so much as being prepared for a potential pandemic which didn’t happen. I mean, very severe global flu pandemics have happened before. Just because H1N1 didn’t become as virulent as expected doesn’t mean that preparing for that eventuality was a waste of time.
Obviously the crux of the issue is whether the official probability estimates and predictions for these types of threats are accurate or not. It’s difficult to judge this in any individual case that fails to develop into a serious problem, but if you can observe a consistent ongoing pattern of dire predictions that do not pan out, that is evidence of an underlying bias in the estimates of risk. Preparing for an eventuality as if it had a 10% probability of happening when the true risk is 1% will lead to serious misallocation of resources.
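To make the 10%-versus-1% point concrete, here is a rough back-of-the-envelope sketch in Python; every cost and loss figure in it is invented purely for illustration:

    # Toy comparison of preparation budgets under an inflated vs. a true risk
    # estimate. Every number here is hypothetical.

    true_risk = 0.01      # actual probability the pandemic materialises
    believed_risk = 0.10  # official estimate used for planning
    loss_if_unprepared = 1_000_000_000  # assumed damage if it hits and nothing was done

    # Crudely, spending up to (risk * loss) on preparation is justifiable.
    justified_budget = true_risk * loss_if_unprepared
    planned_budget = believed_risk * loss_if_unprepared

    print(f"Budget justified by the true risk:       {justified_budget:>13,.0f}")
    print(f"Budget implied by the inflated estimate: {planned_budget:>13,.0f}")
    print(f"Overspend factor: {planned_budget / justified_budget:.0f}x")

The exact figures don’t matter; the point is that a tenfold error in the probability estimate translates directly into a tenfold error in how much preparation looks worth paying for.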
It looks to me like there is a consistent pattern of overstating the risks of various catastrophes. Rigorously proving this is difficult. I’ve pointed to some examples of what look like overconfident predictions of disaster (there’s lots more in The Rational Optimist). I’m not sure we can easily resolve any remaining disagreement on the extent of risk exaggeration, however.
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Since the era of cheap international travel, there have been about 20 new flu subtypes, and one of those killed 50 million people (the Spanish flu, one of the greatest natural disasters ever), with a couple of others killing a few million. Plus, having almost everyone infected with a severe illness tends to disrupt society.
So to me that looks like there is a substantial risk (bigger than 1%) of something quite bad happening when a new subtype appears.
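As a crude sanity check on that base rate, here is the arithmetic spelled out (the counts are the rough ones from above, and the add-one smoothing is just a reminder not to take a tiny sample too literally):

    # Back-of-the-envelope base rate for "something quite bad" per new subtype.
    # The counts are rough and the sample is tiny, so this is an order-of-
    # magnitude argument, not a forecast.

    new_subtypes = 20   # new flu subtypes since cheap international travel (rough)
    catastrophic = 1    # Spanish flu, ~50 million deaths
    severe = 2          # pandemics that killed a few million each (rough count)

    raw_rate = (catastrophic + severe) / new_subtypes
    # Add-one (Laplace) smoothing, since 3-out-of-20 is a very noisy count:
    smoothed_rate = (catastrophic + severe + 1) / (new_subtypes + 2)

    print(f"Raw frequency of a bad outcome per new subtype: {raw_rate:.0%}")      # 15%
    print(f"With add-one smoothing:                         {smoothed_rate:.0%}") # ~18%

Either way it comes out an order of magnitude above 1%, which is the point.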
Given how difficult it is to predict biological systems, I think it makes sense to treat the arrival of a new flu subtype with concern and for governments to set up contingency programmes. That’s not to say that the media didn’t hype swine flu and bird flu, but that doesn’t mean that the government preparations were an overreaction.
That’s not to say that no threats are exaggerated; some are, and others (low-probability, global threats like asteroid strikes or big volcanic eruptions) get too little attention.
I wouldn’t put much trust in Matt Ridley’s abilities to estimate risk:
Mr Ridley told the Treasury Select Committee on Tuesday, that the bank had been hit by “wholly unexpected” events and he defended the way he and his colleagues had been running the bank.
“We were subject to a completely unprecedented and unpredictable closure of the world credit markets,” he said.

http://news.bbc.co.uk/1/hi/7052828.stm (yes, it’s the same Matt Ridley)
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Well obviously. I refer you to my previous comment. At this point our remaining disagreement on this issue is unlikely to be resolved without better data. Continuing to go back and forth repeating that I think there is a pattern of overestimation for certain types of risk and that you think the estimates are accurate is not going to resolve the question.
Maybe at first, but I clearly recall that the hype was still ongoing even after it was known that this was a milder version of the flu than usual.
And the reactions were not well designed to handle the flu either. One example is that my university installed hand sanitizers, well, pretty much everywhere. But the flu is primarily transmitted not by hand-to-hand contact, but by miniature droplets when people cough, sneeze, or just talk and breathe:
Spread of the 2009 H1N1 virus is thought to occur in the same way that seasonal flu spreads. Flu viruses are spread mainly from person to person through coughing, sneezing or talking by people with influenza. Sometimes people may become infected by touching something – such as a surface or object – with flu viruses on it and then touching their mouth or nose.

http://www.cdc.gov/h1n1flu/qa.htm
Wikipedia takes a more middle-of-the-road view, noting that it’s not entirely clear how much transmission happens via which route, but still:
The length of time the virus will persist on a surface varies, with the virus surviving for one to two days on hard, non-porous surfaces such as plastic or metal, for about fifteen minutes from dry paper tissues, and only five minutes on skin.

http://en.wikipedia.org/wiki/Influenza
Which really suggests to me that hand-washing (or sanitizing) just isn’t going to be terribly effective. The best preventative is making sick people stay home.
Now, regular hand-washing is a great prophylactic against many other disease pathways, of course. But not for the purpose it was supposedly serving here.
I interpret what happened with H1N1 a little differently. Before it was known how serious it would be, the media started covering it. Now, even given that H1N1 was relatively harmless, it is quite likely that similar but non-harmless diseases will appear in the future, so having containment strategies and knowing what works is important. By making H1N1 sound scary, they gave countries and health organizations an incentive to test their strategies with lower consequences for failure than there would be if they had to test them on something more lethal. The reactions make a lot more sense if you look at them as a large-scale training exercise. If people had known that it was harmless, they would’ve behaved differently and lowered the validity of the test.
This looks like a fully general argument for panicking about anything.

It isn’t fully general; it only applies when the expected benefits (from lessons learned) exceed the costs of that particular kind of drill, and there’s no cheaper way to learn the same lessons.
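A minimal way to state that criterion, with every figure below made up just to show the shape of the comparison:

    # The "drill" framing is only defensible if the expected value of the
    # lessons learned exceeds the cost of the over-reaction, and there is no
    # cheaper rehearsal available. All figures are hypothetical.

    p_future_lethal_strain = 0.05                # chance a genuinely lethal strain appears soon
    value_of_rehearsed_response = 2_000_000_000  # damage avoided because plans were tested
    cost_of_drill = 50_000_000                   # cost of over-reacting to a mild strain

    expected_benefit = p_future_lethal_strain * value_of_rehearsed_response
    worth_it = expected_benefit > cost_of_drill  # still need: no cheaper way to rehearse

    print(f"Expected benefit of the rehearsal: {expected_benefit:,.0f}")
    print(f"Cost of the drill:                 {cost_of_drill:,.0f}")
    print(f"Passes the test (ignoring cheaper alternatives): {worth_it}")

Whether those inequalities actually held for H1N1 is, of course, exactly what’s in dispute.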
Are you claiming that this was actually the plan all along? That our infinitely wise and benevolent leaders decided to create a panic irrespective of the actual threat posed by H1N1 for the purposes of a realistic training exercise?
If this is not what you are suggesting, are you saying that, although this panic was in fact an example of general government incompetence in the field of risk management, it purely coincidentally turned out to be exactly the optimal thing to do in retrospect?
I have no evidence that would let me distinguish between these two scenarios. I also note that there’s plenty of middle ground—for example, private media companies could’ve decided to create an unjustified panic for ratings, while the governments and hospitals decided to make the best of it. Or more likely, the panic developed without anyone influential making a conscious decision to promote or suppress it either way.
Just because some institutions over-reacted or implemented ineffective measures doesn’t mean that the concern wasn’t proportionate, or that effective measures weren’t also being implemented.
In the UK, the government response was to tell infected people to stay at home and away from their GPs, and provide a phone system for people to get Tamiflu. They also ran advertising telling people to cover their mouths when they sneezed (“Catch it, bin it, kill it”).
If anything, the government reaction was insufficient, because the phone system was delayed and the Tamiflu stockpiles were limited (although Tamiflu is apparently pretty marginal anyway, so making infected people stay at home was more important).
The media may have carried on hyping the threat after it turned out not to be so severe. They also ran stories complaining that the threat had been overhyped and the effort wasted. Just because the media or university administrators say stupid things about something, that doesn’t mean it’s not real.