Thanks! This is exactly the kind of toy model I thought would help move these discussions forward.
The part I’m most suspicious of is the model of the control system. I have written a Colab notebook exploring the issue in some detail, but briefly:
If you run the control system model on the past (2020), it vastly over-predicts R.
This is true even in the very recent past, when pandemic fatigue should have “set in.”
Of course, by your assumptions, it should over-predict past R to some extent, because we now have pandemic fatigue and didn’t then.
However:
It seems better to first propose a model we know can match past data, and then add a tuning term/effect for “pandemic fatigue” for future prediction.
Because this model can’t predict even the very recent past, it’s not clear it models anything we have observed about pandemic fatigue (i.e. the observations leading us to think pandemic fatigue is happening).
Instead, it effectively assumes a discontinuity at 12/23/20, where a huge new pandemic fatigue effect turns on. This effect only exists in the future; if it were turned on in the past, it would have swamped all other factors.
To get a sense of scale, here is one of the plots from my notebook:
https://64.media.tumblr.com/823e3a2f55bd8d1edb385be17cd546c7/673bfeb02b591235-2b/s640x960/64515d7016eeb578e6d9c45020ce1722cbb6af59.png
The colored points show historical data on R vs. the 6-period average, with color indicating the date.
The first thing that stands out is that these two variables are not even approximately in a one-to-one relationship.
The second thing that stands out is that, if you were to fit some one-to-one relationship anyway, it would be very different from the toy model here.
Third thing: the toy model’s baseline R is anchored to the “top of a hill” on a curve that has been oscillating quickly. With an exponent of zero, it would stay stuck at the top of the recent hills, i.e. it would still over-predict the recent past. (With a positive exponent, it shoots above those hills.)
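The anchoring problem can be illustrated numerically. This is a toy sketch, not the notebook’s actual code: the oscillating R series below is synthetic, and the “anchor” is simply the most recent local peak.

```python
import numpy as np

# Illustrative only: a synthetic R series oscillating quickly around 1,
# standing in for the historical curve in the plot above.
rng = np.random.default_rng(0)
t = np.arange(60)
R_hist = 1.0 + 0.12 * np.sin(t / 3.0) + rng.normal(0.0, 0.02, t.size)

# Anchor the toy model's baseline to the most recent local peak
# ("top of a hill"), as described above.
anchor = R_hist[-10:].max()

# With an exponent of zero, the model predicts the anchor everywhere,
# so its average backtest error is just how far the anchor sits above
# the typical historical value.
print(f"anchor = {anchor:.3f}")
print(f"mean historical R = {R_hist.mean():.3f}")
print(f"mean over-prediction = {anchor - R_hist.mean():.3f}")
```

Because the series spends most of its time well below any local peak, a baseline pinned to a peak over-predicts the past by construction.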
More general commentary on the issue:
It seems like you are:
(1) first, assuming that the control system sets R as a function of infections;
(2) then, observing that we still have R~1 (as always), despite a vast uptick in infections;
(3) then, concluding that the control system has drastically changed all of a sudden, because that’s the only way to preserve assumption (1).
Whereas, it seems more natural to take (3) as evidence that (1) was wrong.
In other words, you are looking at a mostly constant R (with a slight sustained recent upswing), and concluding that this lack of a change is actually the result of two large changes that cancel out:
1. Control dynamics that should make R go down
2. A new discontinuity in control dynamics that conspires to exactly cancel #1, preserving a ~constant R
When R has been remarkably constant the whole time, I’m suspicious of introducing a sudden “blast” of large changes in opposing directions that net out to R still staying constant. What evidence is there for this “blast”?
(The recent trajectory of R is not evidence for it, as discussed above: it’s impossible to explain recent R with these forces in play. They have to have suddenly appeared, like a mean Christmas present.)
My model of the R/cases trends is something like:
“R is always ~1 with noise/oscillations”
“cases are exponential in R, so when the noise/oscillations conspire upwards for a while, cases blow up”
The missing piece is what sets the noise/oscillations, because if we can control that, we can help. However, any model of the noise/oscillations must calibrate them so it reproduces 2020’s tight control around R~1.
This tight control was a surprise and is hard to reproduce in a model, but if our model doesn’t reproduce it, we will go on being surprised by the same thing that surprised us before.
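The two quoted claims above can be sketched in a few lines. All parameters here are made up for illustration: R follows mean-reverting noise tightly controlled around 1, while cases compound multiplicatively in R, so a sustained upward run of the noise blows cases up even though R barely moves.

```python
import numpy as np

# R wanders tightly around 1 via mean-reverting (AR(1)-style) noise.
rng = np.random.default_rng(1)
n = 200
R = np.ones(n)
for t in range(1, n):
    R[t] = 1.0 + 0.9 * (R[t - 1] - 1.0) + rng.normal(0.0, 0.01)

# Cases are exponential in R: each step multiplies by the current R.
cases = np.empty(n)
cases[0] = 1000.0
for t in range(1, n):
    cases[t] = cases[t - 1] * R[t]  # one generation per step

print(f"R stayed within [{R.min():.3f}, {R.max():.3f}]")
print(f"cases ranged over [{cases.min():.0f}, {cases.max():.0f}]")
```

The point of the sketch is the asymmetry: tiny excursions of R, when they persist, produce large swings in cases.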
> The colored points show historical data on R vs. the 6-period average, with color indicating the date.
Thanks for actually plotting historical Rt vs infection rates!
> Whereas, it seems more natural to take (3) as evidence that (1) was wrong.
In my own comment, I also identified as a problem the control-system model’s assumption that Rt is proportional (in any form) to infections. Based on my own observations of behaviour and government response, the MNM hypothesis (governments hitting the panic button as imminent death approaches, i.e. as hospitals begin to be overwhelmed) seems more likely than a response that ramps up in proportion to recent infections. I think that explains the tight oscillations.
I’d say the dominant contributor to control systems is something like a step function at a particular level near where hospitals are overwhelmed, and individual responses proportionate to exact levels of infection are a lesser part of it.
You could maybe operationalize this by looking at past hospitalization rates, fitting a logistic curve to them at the ‘overwhelmed’ threshold and seeing if that predicts Rt. I think it would do pretty well.
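That operationalization might look something like the sketch below. Everything here is hypothetical: the logistic functional form, the parameter values, and the data are synthetic stand-ins for real hospitalization rates and Rt estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(h, R_hi, R_lo, h_thresh, k):
    """Logistic 'panic button': Rt sits near R_hi until hospitalizations h
    approach the overwhelmed threshold h_thresh, then drops toward R_lo."""
    return R_lo + (R_hi - R_lo) / (1.0 + np.exp(k * (h - h_thresh)))

# Synthetic stand-in for past data: noisy Rt observations generated from
# a true threshold at h = 60 (real data would go here).
rng = np.random.default_rng(2)
h = np.linspace(0.0, 100.0, 80)
Rt_obs = step_response(h, 1.15, 0.85, 60.0, 0.2) + rng.normal(0.0, 0.02, h.size)

# Fit the logistic curve and recover the 'overwhelmed' threshold.
params, _ = curve_fit(step_response, h, Rt_obs, p0=[1.1, 0.9, 50.0, 0.1])
R_hi, R_lo, h_thresh, k = params
print(f"fitted threshold ~ {h_thresh:.1f}; Rt drops {R_hi:.2f} -> {R_lo:.2f}")
```

If the step-function story is right, a fit like this on real hospitalization data should recover a sharp threshold and predict Rt better than any smooth function of recent infections.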
> This tight control was a surprise and is hard to reproduce in a model, but if our model doesn’t reproduce it, we will go on being surprised by the same thing that surprised us before.
My own predictions are essentially based on continuing to expect the ‘tight control’ to continue somehow, i.e. flattening out cases or declining a bit at a very high level after a large swing upwards.
It looks like Rt is currently just below 1 in London (data from the subsequent couple of days seem to confirm this). That would outright falsify any model claiming that, given our control system response, Rt never goes below 1 at any level of infection with the new variant; according to your graph, that is exactly what the infections-exponential model predicts.
If you ran this model on the past, what would it predict? Based on what you’ve said, Rt never goes below one, so your diagram implies a huge first wave with a rapid rise to partial herd immunity over a few weeks. That’s the exact same predictive error that was made last year.
I note (outside view) that this is very similar to the predictive mistake made last February/March with the original Covid-19: many around here were practically certain we were bound for an immediate (in a month or two) enormous herd immunity overshoot.
> Based on what you’ve said, Rt never goes below one
You’re saying nostalgebraist says Rt never goes below 1?
I interpreted “R is always ~1 with noise/oscillations” to mean that it could go below 1 temporarily. And that seems consistent with the current London data. No?
I meant ‘based on what you’ve said about Zvi’s model’, i.e. nostalgebraist says Zvi’s model implies Rt never goes below 1: if you look at the plot he produced, Rt is always above 1 given Zvi’s assumptions, which the London data falsified.
Rt can go below one in Zvi’s model. It just takes an even higher rate of new infections.
Here’s the same picture, with the horizontal axis extended so this is visible: https://64.media.tumblr.com/008005269202c21313ef5d5db6a8a4c6/83a097f275903c4c-81/s2048x3072/7b2e6e27f1fb7ad57ac0dcc6bd61fce77a18a2c1.png
Of course, in the real world, Rt dips below one all the time, as you can see in the colored points.
As a dramatic example, Zvi’s model is predicting the future forward from 12/23/20. But a mere week before that date, Rt was below one!
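For concreteness, here is one hypothetical way a control response can admit Rt < 1 at high enough infection rates. The power-law form and all numbers are stand-ins, not Zvi’s actual model: the only point is that a response which strengthens with infections crosses Rt = 1 at some finite infection level.

```python
# Hypothetical decreasing control response: Rt falls as a power law in the
# infection rate I, anchored at (I_anchor, R_anchor). All values illustrative.
def rt(I, I_anchor=1000.0, R_anchor=1.2, k=0.25):
    return R_anchor * (I / I_anchor) ** (-k)

# Solving R_anchor * (I* / I_anchor)^(-k) = 1 gives the crossover level:
def crossover(I_anchor=1000.0, R_anchor=1.2, k=0.25):
    return I_anchor * R_anchor ** (1.0 / k)

I_star = crossover()
print(f"Rt dips below 1 once I exceeds ~{I_star:.0f} (units are illustrative)")
```

Extending the horizontal axis of the plot, as in the picture above, is just the graphical version of computing this crossover.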