Yeah, that provides some more examples. The elite was very worried about existential risks from nuclear war (“The Fate of the Earth”), resource shortages and mass starvation (“Club of Rome”), and technology-based totalitarianism (“1984”). Now, having been embarrassed by falling for too many cries of wolf (or at least, for worrying prematurely), they are wary of being burned again.
I don’t think worrying about nuclear war during the Cold War constituted either “crying wolf” or worrying prematurely. The Cuban Missile Crisis, the Able Archer 83 exercise (a year after “The Fate of the Earth” was published), and various false alert incidents could have resulted in nuclear war, and I’m not sure why anyone who opposed nuclear weapons at the time would be “embarrassed” in the light of what we now know.
I don’t think an existential risk has to be a certainty for it to be worth taking seriously.
In the US, concerns about some technology risks like EMP attacks and nuclear terrorism are still taken seriously, even though these are probably unlikely to happen and the damage would be much less severe than a nuclear war.
I don’t think an existential risk has to be a certainty for it to be worth taking seriously.
I agree. And nuclear war was certainly a risk that was worth taking seriously at the time.
However, that doesn’t make my last sentence any less true, especially if you replace “embarrassed” with “exhausted”. The risk of a nuclear war, somewhere, some time within the next 100 years, is still high—more likely than not, I would guess. It probably won’t destroy the human race, or even modern technology, but it could easily cost 400 million human lives. Yet, in part because people have become tired of worrying about such things, having already worried for decades, no one seems to be doing much about this danger.
When you say that no one seems to be doing much, are you sure that’s not just because the efforts don’t get much publicity?
There is a lot that’s being done:
Most nuclear-armed governments have massively reduced their nuclear weapon stockpiles, and try to stop other countries getting nuclear weapons. There’s an international effort to track fissile material.
After the Cold War ended, the West set up programmes to employ Soviet nuclear scientists, which have run until today (Russia is about to end them).
South Africa had nuclear weapons, then gave them up.
Israel destroyed the Iraqi and Syrian nuclear programmes with airstrikes. OK, self-interested, but if existing nuclear states stop their enemies getting nuclear weapons, it reduces the risk of a nuclear war.
Somebody wrote the Stuxnet worm to attack Iran’s enrichment facilities (probably), and Iran is under massive international pressure not to develop nuclear weapons.
Western leaders are at least talking about the goal of a world without nuclear weapons. OK, probably empty rhetoric.
India and Pakistan have reduced the tension between them, and now keep their nuclear weapons stored disassembled.
The US is developing missile defences to deter ‘rogue states’ who might have a limited nuclear missile capability (although I’m not sure why the threat of nuclear retaliation isn’t a better deterrent than shooting down missiles). The Western world is paranoid about nuclear terrorism, even putting nuclear detectors in its ports to try to detect weapons being smuggled into the country (which a lot of experts think is silly, but I guess it might make it harder to move fissile material around on the black market).
etc. etc.
Sure, in the 100 year timeframe, there is still a risk. It just seems like a world with two ideologically opposed nuclear-armed superpowers, with limited ways to gather information and their arsenals on a hair trigger, was much riskier than today’s situation. Even when “rogue states” get hold of nuclear weapons, they seem to want them to deter a US/UN invasion, rather than to actually use offensively.
Plus we invented the internet—greatly strengthening international relations and creating social and economic interdependency.
Now, having been embarrassed by falling for too many cries of wolf (or at least, for worrying prematurely), they are wary of being burned again.
This doesn’t appear to be the case at all. There are a variety of claimed existential risks which the intellectual elite are in general quite worried about. They just don’t overlap much with the kind of risks people here talk about. Global warming is an obvious example (and some people here probably think they’re right on that one) but the overhyped fears of SARS and H1N1 killing millions of people look like recent examples of lessons about crying wolf not being learned.
I don’t know about SARS, but in the case of H1N1 it wasn’t “crying wolf” so much as being prepared for a potential pandemic which didn’t happen. I mean, very severe global flu pandemics have happened before. Just because H1N1 didn’t become as virulent as expected doesn’t mean that preparing for that eventuality was a waste of time.
Obviously the crux of the issue is whether the official probability estimates and predictions for these types of threats are accurate or not. It’s difficult to judge this in any individual case that fails to develop into a serious problem, but if you can observe a consistent ongoing pattern of dire predictions that do not pan out, this is evidence of an underlying bias in the estimates of risk. Preparing for an eventuality as if it had a 10% probability of happening when the true risk is 1% will lead to serious misallocation of resources.
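The 10%-versus-1% point can be made concrete with a toy expected-loss calculation (every number here is invented for illustration; this is a sketch of the reasoning, not a model of any real threat):

```python
# Toy sketch: how an inflated probability estimate distorts how much
# preparation spending looks rational. All figures are hypothetical.

def max_worthwhile_spend(p_event, loss_if_unprepared, loss_if_prepared):
    """Upper bound on rational preparation spending: the expected loss
    avoided by preparing for the event."""
    return p_event * (loss_if_unprepared - loss_if_prepared)

loss_unprepared = 1_000_000_000  # hypothetical damage with no preparation
loss_prepared = 200_000_000      # hypothetical damage if prepared

# Budget justified by a believed 10% risk vs. a true 1% risk:
spend_believed = max_worthwhile_spend(0.10, loss_unprepared, loss_prepared)
spend_true = max_worthwhile_spend(0.01, loss_unprepared, loss_prepared)

print(spend_believed)  # budget justified by the 10% estimate
print(spend_true)      # budget justified by the true 1% risk
```

Under these numbers, the inflated estimate justifies roughly ten times the spending that the true risk would, which is the misallocation the comment above describes.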
It looks to me like there is a consistent pattern of overstating the risks of various catastrophes. Rigorously proving this is difficult. I’ve pointed to some examples of what look like over-confident predictions of disaster (there’s lots more in The Rational Optimist). I’m not sure we can easily resolve any remaining disagreement on the extent of risk exaggeration, however.
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Since the era of cheap international travel began, there have been about 20 new flu subtypes, and one of those killed 50 million people (the Spanish flu, one of the greatest natural disasters ever), with a couple of others killing a few million. Plus, having almost everyone infected with a severe illness tends to disrupt society.
So to me that looks like there is a substantial risk (bigger than 1%) of something quite bad happening when a new subtype appears.
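The base-rate arithmetic behind that “bigger than 1%” claim can be sketched from the figures above (the counts are the rough ones quoted in this thread, not authoritative epidemiological data):

```python
# Rough base-rate sketch: ~20 new flu subtypes since cheap international
# travel, of which 1 was catastrophic (Spanish flu) and roughly 2 more
# killed millions. These counts are the approximate ones from the thread.

subtypes = 20
bad_outcomes = 3  # 1 catastrophic + ~2 severe

naive_rate = bad_outcomes / subtypes
# Laplace smoothing (add one "bad" and one "benign" outcome) tempers the
# small-sample estimate toward 1/2, acknowledging how few data points we have.
smoothed_rate = (bad_outcomes + 1) / (subtypes + 2)

print(naive_rate)     # empirical frequency of a bad outcome per new subtype
print(smoothed_rate)  # smoothed estimate
```

Either way the per-subtype frequency comes out around 15-18%, an order of magnitude above the 1% threshold mentioned earlier, though with only ~20 data points the error bars are obviously wide.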
Given how difficult it is to predict biological systems, I think it makes sense to treat the arrival of a new flu subtype with concern and for governments to set up contingency programmes. That’s not to say that the media didn’t hype swine flu and bird flu, but that doesn’t mean that the government preparations were an overreaction.
That’s not to say that some threats aren’t exaggerated, and others (low-probability, global threats like asteroid strikes or big volcanic eruptions) don’t get enough attention.
I wouldn’t put much trust in Matt Ridley’s abilities to estimate risk:
http://news.bbc.co.uk/1/hi/7052828.stm (yes, it’s the same Matt Ridley)
Mr Ridley told the Treasury Select Committee on Tuesday, that the bank had been hit by “wholly unexpected” events and he defended the way he and his colleagues had been running the bank.
“We were subject to a completely unprecedented and unpredictable closure of the world credit markets,” he said.
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Well obviously. I refer you to my previous comment. At this point our remaining disagreement on this issue is unlikely to be resolved without better data. Continuing to go back and forth repeating that I think there is a pattern of overestimation for certain types of risk and that you think the estimates are accurate is not going to resolve the question.
Maybe at first, but I clearly recall that the hype was still ongoing even after it was known that this was a milder flu version than usual.
And the reactions were not well designed to handle the flu either. One example is that my university installed hand sanitizers, well, pretty much everywhere. But the flu is primarily transmitted not from hand-to-hand contact, but by miniature droplets when people cough, sneeze, or just talk and breathe (http://www.cdc.gov/h1n1flu/qa.htm):
Spread of the 2009 H1N1 virus is thought to occur in the same way that seasonal flu spreads. Flu viruses are spread mainly from person to person through coughing, sneezing or talking by people with influenza. Sometimes people may become infected by touching something – such as a surface or object – with flu viruses on it and then touching their mouth or nose.
Wikipedia (http://en.wikipedia.org/wiki/Influenza) takes a more middle-of-the-road view, noting that it’s not entirely clear how much transmission happens in which route, but still:
The length of time the virus will persist on a surface varies, with the virus surviving for one to two days on hard, non-porous surfaces such as plastic or metal, for about fifteen minutes from dry paper tissues, and only five minutes on skin.
Which really suggests to me that hand-washing (or sanitizing) just isn’t going to be terribly effective. The best preventative is making sick people stay home.
Now, regular hand-washing is a great prophylactic for many other disease pathways, of course. But not for its supposed purpose here.
I interpret what happened with H1N1 a little differently. Before it was known how serious it would be, the media started covering it. Now even given that H1N1 was relatively harmless, it is quite likely that similar but non-harmless diseases will appear in the future, so having containment strategies and knowing what works is important. By making H1N1 sound scary, they gave countries and health organizations an incentive to test their strategies with lower consequences for failure than there would be if they had to test them on something more lethal. The reactions make a lot more sense if you look at it as a large-scale training exercise. If people knew that it was harmless, they would’ve behaved differently and lowered the validity of the test.
This looks like a fully general argument for panicking about anything.
It isn’t fully general; it only applies when the expected benefits (from lessons learned) exceed the costs of that particular kind of drill, and there’s no cheaper way to learn the same lessons.
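That condition can be written out as a toy decision rule (every number and name below is invented purely to illustrate the shape of the argument, not to model any real health agency’s reasoning):

```python
# Hypothetical decision rule for the "scare as training exercise" framing:
# a drill is worthwhile only if the expected loss averted by the lessons
# learned exceeds the drill's cost, and no cheaper rehearsal exists.

def drill_worthwhile(p_future_threat, loss_averted_by_lessons,
                     drill_cost, cheapest_alternative_cost):
    expected_benefit = p_future_threat * loss_averted_by_lessons
    return (expected_benefit > drill_cost
            and drill_cost <= cheapest_alternative_cost)

# Invented figures: a 5% chance of a future lethal pandemic whose damage
# the lessons would reduce by $10bn justifies a $100m drill...
print(drill_worthwhile(0.05, 10_000_000_000, 100_000_000, 150_000_000))
# ...but not if the lessons are only worth $1bn of averted damage.
print(drill_worthwhile(0.05, 1_000_000_000, 100_000_000, 150_000_000))
```

So the argument isn’t "panic about everything"; it collapses whenever the stakes of plausible future threats are low or a cheaper rehearsal is available.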
Are you claiming that this was actually the plan all along? That our infinitely wise and benevolent leaders decided to create a panic irrespective of the actual threat posed by H1N1 for the purposes of a realistic training exercise?
If this is not what you are suggesting, are you saying that, although in fact this panic was an example of general government incompetence in the field of risk management, it purely coincidentally turned out to be exactly the optimal thing to do in retrospect?
I have no evidence that would let me distinguish between these two scenarios. I also note that there’s plenty of middle ground—for example, private media companies could’ve decided to create an unjustified panic for ratings, while the governments and hospitals decided to make the best of it. Or more likely, the panic developed without anyone influential making a conscious decision to promote or suppress it either way.
Just because some institutions over-reacted or implemented ineffective measures doesn’t mean that the concern wasn’t proportionate, or that effective measures weren’t also being implemented.
In the UK, the government response was to tell infected people to stay at home and away from their GPs, and provide a phone system for people to get Tamiflu. They also ran advertising telling people to cover their mouths when they sneezed (“Catch it, bin it, kill it”).
If anything, the government reaction was insufficient, because the phone system was delayed and the Tamiflu stockpiles were limited (although Tamiflu is apparently pretty marginal anyway, so making infected people stay at home was more important).
The media may have carried on hyping the threat after it turned out not to be so severe. They also ran stories complaining that the threat had been overhyped and the effort wasted. Just because the media or university administrators say stupid things about something, that doesn’t mean it’s not real.
SARS and H1N1 both looked like media-manufactured scares, rather than actual concern from the intellectual elite.
It wasn’t just the media:
Take the response to the avian flu outbreak in 2005. Dr David Nabarro, the UN systems coordinator for human and avian influenza, declared: ‘I’m not, at the moment, at liberty to give you a prediction on [potential mortality] numbers.’ He then gave a prediction on potential mortality numbers: ‘Let’s say, the range of deaths could be anything from five million to 150million.’ Nabarro should have kept his estimating prowess enslaved: the number of cases of avian flu stands at a mere 498, of which just 294 have proved fatal.
…
On 11 June 2009, just over a month after the initial outbreak in Mexico, the World Health Organisation finally announced that swine flu was now worthy of its highest alert status of level six, a global pandemic. Despite claims that there was no need to panic, that’s exactly what national health authorities did. In the UK, while the Department of Health was closing schools, politicians were falling over themselves to imagine the worst possible outcomes: second more deadly waves of flu, virus mutation – nothing was too far-fetched for it not to become a public announcement. This was going to be like the great Spanish Flu pandemic of 1918-20. But worse.
However, just as day follows nightmares, the dawning reality proved to be rather more mundane. By March 2010, nearly a full year after the H1N1 virus first began frightening the British government, the death toll stood not in the hundreds of thousands, but at 457. To put that into perspective, the average mortality rate for your common-or-garden flu is 600 deaths per year in a non-epidemic year and between 12,000 and 13,800 deaths per year in an epidemic year. In other words, far from heralding the imagined super virus, swine flu was more mild than the strains of flu we’ve lived with, and survived, for centuries. Reflecting on the hysteria which characterised the WHO’s response to Mexico, German politician Dr Wolfgang Wodarg told the WHO last week: ‘What we experienced in Mexico City was very mild flu which did not kill more than usual – which killed even less than usual.’
So Nabarro explicitly says that he’s talking about a possibility and not making a prediction, and ABC News reports it as a prediction. This seems consistent with the media-manufactured scare model.