Is somebody keeping track of the “what if we’re wrong and it turns out this is another Y2K” scenario? Social distancing, closing borders, heightened awareness, and preventative measures: it seems like a lot is happening that could make this way less scary, at least in the US, than most of the mainstream scenarios.
I’m not interested in hearing from denier types who think this is “just another flu”, but rather from the thoughtful people who have specific, testable predictions that would demonstrate this is more social contagion than most of us suspect.
That is, a combination of “prevention work successfully means no big disasters” and “absence of prevention work doesn’t cause any major disasters”? I think that cat is already out of the bag on the latter one; people might end up disagreeing on whether it was better to be in Iran or Wuhan, but they won’t be able to disagree that the lockdown in Wuhan had an effect.
I think there will be variation in what sorts of social distancing happen, which we should be able to back out of the data, and so demonstrate that social distancing had an effect. (I expect the effect will be smaller than many people hope, but it’ll still be noticeable.) After all, we could see the effect in 1918 influenza data, and we have a much better ability now to track how people come into contact with each other.
[I expect the main outcome is that people take insufficient protective measures, which makes those measures look like a waste, or we get results like “ah look, extensive social distancing meant the peak happened two weeks later!”, which is of unclear value compared to the costs.]
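To make the “back the effect out of the data” idea concrete, here is a minimal sketch with entirely made-up case counts for two hypothetical regions; a real analysis would need many regions, controls for testing rates, and allowance for incubation and reporting lags.

```python
# Hypothetical sketch: estimating a social-distancing effect from regional
# variation in case counts. All numbers and "regions" are invented.
import numpy as np

# Daily confirmed-case counts over two weeks for two imaginary regions.
cases_distancing = np.array(
    [100, 115, 130, 148, 165, 180, 198, 214, 230, 247, 262, 278, 291, 305])
cases_no_distancing = np.array(
    [100, 130, 168, 220, 285, 370, 480, 625, 810, 1050, 1360, 1770, 2300, 2990])

def growth_rate(cases):
    """Fit log(cases) ~ a + r*t and return the daily exponential growth rate r."""
    t = np.arange(len(cases))
    r, _intercept = np.polyfit(t, np.log(cases), 1)
    return r

r_d = growth_rate(cases_distancing)
r_n = growth_rate(cases_no_distancing)
print(f"daily growth with distancing:     {r_d:.3f}")
print(f"daily growth without distancing:  {r_n:.3f}")
print(f"implied reduction in growth rate: {r_n - r_d:.3f} per day")
```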
Those arguments make sense, but what if, despite our best modeling, cases just start to decline and the whole thing disappears in a month? At what point would we have to seriously re-evaluate everything we know about this virus? Say new cases plunge 90% next week? 50%?
Scenario planners try to think of every possible alternative, including those that seem far-fetched. I’m trying to figure out what the positive alternatives would look like.
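For a rough sense of scale on the “plunge 90% next week” threshold (my own back-of-the-envelope arithmetic, not anything the commenters computed), a sustained weekly drop of that size would imply an effective reproduction number far below one, assuming a roughly five-day serial interval and stable reporting:

```python
# Rough arithmetic on what a sudden plunge in weekly cases would imply about
# transmission. The serial interval is an assumed round number; estimates vary.
serial_interval_days = 5.0

def implied_R(weekly_ratio, serial_interval=serial_interval_days):
    """If cases change by `weekly_ratio` week over week, the implied effective
    reproduction number is roughly weekly_ratio ** (serial_interval / 7)."""
    return weekly_ratio ** (serial_interval / 7.0)

for drop in (0.5, 0.9):
    ratio = 1.0 - drop
    print(f"{drop:.0%} weekly drop  ->  R_eff ~ {implied_R(ratio):.2f}")
```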
From A Technical Explanation of Technical Explanation:
How would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn’t. It isn’t going to happen.
If a miracle happens, then a miracle happens. I’m not holding my breath.
The ways in which I do expect Vaniver_2021 to look back at Vaniver_2020 and think “yeah, he was worried about that but it didn’t turn out to be relevant” are various unknowns about the virus that might be fine or might be bad. For example, we don’t know how bad surface transmission will be, but that’s a big factor in what sort of isolation protocols you need to have. We don’t know whether existing anti-virals will be effective. We don’t know how long immunity will last, but that’s a big factor in whether or not ‘herd immunity’ strategies will work, and how valuable it is to not catch it. We don’t know how big a deal antibody-dependent enhancement will be, or how that will interact with the duration of immunity. We don’t know what long-term effects of infection (think fatigue, disability, infertility, etc.) look like. We don’t know how long people are infectious before they show noticeable symptoms.
For all of those things, I put significant probability on the “it’s fine” side of the uncertainty. But “not fine” is quite bad compared to “fine”, such that the expected utility shakes out to taking it seriously until we know more. For example, I now think that if you’re taking your temperature every day, the “infectious before noticeable symptoms” window is probably about a day, which seems pretty tolerable, but I don’t think I made a mistake in my earlier assessment. If the long-term disability risk turns out to be closer to 1% than 10%, then I’ll adjust my prior on long-term disability for next time (in the obvious way: I’ll have two datapoints instead of one), but I won’t think “oh, I cried wolf.”
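A toy expected-cost calculation, with invented numbers, of the asymmetry being described here: even if “it’s fine” is the more likely branch, the cost gap can make precautions the better bet.

```python
# Toy expected-value illustration of the "take it seriously until we know more"
# reasoning above; all numbers are made up for the sake of the example.
p_bad = 0.10                  # assumed probability the unknown turns out badly
cost_precaution = 1.0         # cost of taking precautions (arbitrary units)
cost_bad_unprotected = 100.0  # cost if it's bad and you took no precautions
cost_bad_protected = 20.0     # cost if it's bad but you did take precautions

ev_precaution = cost_precaution + p_bad * cost_bad_protected
ev_no_precaution = p_bad * cost_bad_unprotected

print(f"expected cost with precautions:    {ev_precaution:.1f}")
print(f"expected cost without precautions: {ev_no_precaution:.1f}")
# Even with most of the probability on "it's fine", the asymmetry in costs
# can make precautions the lower-expected-cost choice.
```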
I thought that we were right about Y2K: people spent a lot of time preparing for it, and their hard work saved us all. Is that wrong? (I understand if you just link to somewhere else and don’t clutter up your thread any further with this digression.)
According to some accounts, as summarized by Wikipedia, there’s not all that much evidence that people who didn’t prepare were bitten by it, or that fixing ahead of time was cheaper or better than fix-on-failure.
I mean Y2K in the sense of lots of fretting about something that turns out not to happen, whether because of significant preparation or because we were just wrong about the urgency.
I realize it doesn’t seem likely, but in the spirit of humility before humanity’s collective ignorance, how might we know we were wrong? Like, clearly nobody’s expecting US cases to suddenly level off and then disappear, but what if that happens anyway? At what point would we say we were just totally wrong?