Previous week: My Covid-19 Thinking: 4⁄17
Other previous foundational Covid-19 thoughts here: On R0, Taking Initial Viral Load Seriously, Seemingly Popular Covid-19 Model is Obvious Nonsense
Spreadsheet I’m using to look at data is here, if you’d like to look.
Epistemic Status: Usual warning that I’m not an expert, just someone trying to figure things out as best he can, and am doubtless making tons of mistakes.
I’m going to try making this a weekly update while there are interesting things to say. I’ll update these thoughts as the week goes by, then publish each Friday.
There’s a whole bunch of different stuff going on. The sections play into my whole picture, but they’re also mostly distinct, so you can skip anything you don’t care about. These are the things I felt were sufficiently relevant to write down.
This is a weird post, because as I was finishing it up, Cuomo came out with antibody test results and that’s going to change everything. So these thoughts are what I got before the antibody tests and before the 4⁄23 data. Next time I’ll deal with the antibody results in full.
What Is Happening?
New York’s situation is rapidly improving. Other places in America, not so much. At a minimum, not so clear.
Previously I’ve been grouping New York and New Jersey, since New Jersey has the second-most infections and is right next to New York.
However, with New Jersey not getting better and New York rapidly improving, this no longer makes as much sense. I’m going to isolate only New York. I’ll still have the New Jersey columns and Ex-NJ columns, but I’ll move them off to the right in the spreadsheet.
New York alone peaked its positive test rate around 50%, and it’s now down to 26.8%. I don’t think we went too far above the point where positive test numbers stop increasing, and our testing has modestly increased since then, but not dramatically so. Clearly things are much better. The naive calculation says we’re down from the peak by half; the second-level one says somewhat more than that, but it’s hard to tell how much more.
We cut things somewhat more than in half in 22 days. If we assume a serial interval of 5 days, which seems like the minimum, that’s 4.4 intervals. That gives an average R0 for this period of about 0.85. The serial interval might be substantially longer, but can’t be much shorter. Our reasonably confident bounds are therefore something like 0.75 to 0.9. The more I look at it, the harder it is to be at any other number.
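As a sanity check on that arithmetic, here is a minimal sketch in Python; the 22-day window, the roughly-halving, and the serial interval range are the assumptions stated above:

```python
# Implied average R over a period in which daily infections roughly halved.
# Over k serial intervals, infections multiply by R^k, so R = decline**(1/k).
days = 22        # days from the peak positive rate to now (from above)
decline = 0.5    # infections cut roughly in half over that window (from above)

for serial_interval in (5, 6, 7):   # days; 5 seems like the minimum
    k = days / serial_interval
    r = decline ** (1 / k)
    print(f"serial interval {serial_interval}d: {k:.1f} intervals, R ~ {r:.2f}")

# Prints R ~ 0.85, 0.83, 0.80: comfortably inside the 0.75 to 0.9 bounds above.
```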
What happens if we use this to try and back out the situation?
If we start with 474 deaths on 4⁄22, assume a 50% undercount (to 717), a 23-day delay, and a 1% death rate, we get something like 71,700 infections on 3⁄30, which was around the peak of positive tests. So, as an alternative calculation method that now seems available to us, we can maybe say that New York had about 70,000 new infections per day at the peak. Then we use the ratios of test results on different days, plus a maximally generous 3.5-day doubling time before the lockdown (I tried 2.5 first and the problems below were even worse), and see what pops out.
That gives us 1.92 million infections so far, and 32,161 real infections today. But it only gives us about 1.04mm infections that are at least 23 days old, and the death count is now 14,828 even before any undercount, which implies a 2% IFR once we apply the same undercount adjustment. Checksum fail, since we assumed 1%. We’d need to split the difference in some form and accept ~1.5% IFR due to hospitals being overwhelmed during this period, or adjust our assumptions somewhere.
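A minimal sketch of that checksum, using only the numbers above (the 1.5x undercount multiplier and the 23-day lag are the stated assumptions):

```python
# Checksum: deaths today should come from infections at least 23 days old.
reported_deaths = 14_828     # cumulative reported NY deaths (from above)
undercount = 1.5             # assume true deaths ~50% above reported (from above)
mature_infections = 1.04e6   # infections at least 23 days old (from above)

implied_ifr = reported_deaths * undercount / mature_infections
print(f"implied IFR ~ {implied_ifr:.1%}")   # ~2.1%, vs. the 1% we assumed: fail
```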
Alternatively, we can move the doubling time back to 2.5 days, force the IFR to be 1% based on that second calculation – the infected 23 days ago are the potentially dead now – and use that to generate the infection counts. That seems more promising, since the checksum should pass, and we’re now less sensitive to day-to-day data fluctuations.
That gets us 4.3 million infected in New York State total, and the number of new infections on 4⁄22 at 70,910. The one-day peak was then around 150,000. We would then expect (if current policy holds until full squashing and there are no herd immunity adjustments) to see another 2.3 million infected from here, for 6.6 million total. We’d expect 66,000 deaths, of which somewhat more than 44,000 would get reported.
That still doesn’t explain the peak in deaths coming as early as 4⁄9, since 23 days before that was 3⁄16, and there’s actually no way in hell we peaked new infections that early. We could claim the real peak in deaths was 4⁄14, though, on the theory that hospitals were overwhelmed and thus the undercounts got bigger. If we do that, we get peak infections on 3⁄21, which starts to make more sense. More likely, this new 23-day number I picked up as consensus is a little too high, and it’s more like the old 21 days. Let’s make that adjustment instead, which gives us two more days of infections to help account for current deaths, and thus reduces our numbers a little. We can cut our numbers by ~15%.
Doing everything again, I get 3.72 million current infections and 1.92 million future infections, which means a total of 56,400 deaths in New York for this wave if we don’t open prematurely.
That would make the infection rate in New York State overall about 19% now and 29% when things are finished. We’d have identified 13.3% of positive cases as of right now, with vastly inadequate testing.
This would also imply New York City was roughly 30% infected, which is on the extreme low end of my previous prior guess range.
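Checking those percentages against the counts above; the ~19.45 million state population is my own approximate figure, not a number from the post:

```python
# Infection rates implied by the adjusted counts above.
ny_pop = 19.45e6            # approximate NY State population (my assumption)
infected_so_far = 3.72e6    # from the recalculation above
infected_future = 1.92e6    # expected additional infections from above

print(f"infected now:  {infected_so_far / ny_pop:.0%}")                       # ~19%
print(f"when finished: {(infected_so_far + infected_future) / ny_pop:.0%}")   # ~29%
```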
All right, that story seems reasonable, with room to adjust it a bit if some of the assumptions prove wrong. But it does seem like it broadly fits before we see the antibody numbers.
Antibody tests came back 13.85% positive. 21.2% for NYC (43% of state population), 14.4% for Long Island (16.7% of state population), 9.8% for Westchester and Rockland (11.7% of state), then 3.6% for everywhere else (32.8% of state population).
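Those regional numbers reproduce the statewide figure as a straight population-weighted average; a quick check, taking the quoted rates and population shares at face value:

```python
# Population-weighted average of the regional antibody rates quoted above.
# Note the quoted shares sum to ~104%, but the weighted sum still matches.
regions = {  # region: (positive rate, share of state population)
    "NYC":                  (0.212, 0.430),
    "Long Island":          (0.144, 0.167),
    "Westchester/Rockland": (0.098, 0.117),
    "Rest of state":        (0.036, 0.328),
}
statewide = sum(rate * share for rate, share in regions.values())
print(f"statewide: {statewide:.2%}")   # 13.85%, matching the headline number
```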
We’ll pick up from there next time. The stories don’t seem that incompatible, but there’s definitely a gap to explain/reconcile.
Now, some still-relevant thoughts I wrote down earlier in the week.
Model Talk
Seemingly Popular Covid-19 Model is Now Even More Obvious Nonsense
There was a headline on 4⁄22 that experts are warning that ‘leading model projects 66,000 deaths, up 10% from before, so we should wait longer to reopen.’ This is, of course, obviously insane. The 10% rise is a blip. It represents three or four days of deaths at current rates (before adjusting for undercounts).
Meanwhile, the projection of 66k deaths comes with almost 46k deaths already observed, and with the highest one-day death count being yesterday, 4⁄21 (as of 4⁄22). How is this not obviously utterly insane? Why is anyone using this for anything? That’s not only impossible, it’s obvious-to-the-naked-eye impossible.
I wonder how I should update about calling things out for being wrong. In general I think calling things out for being wrong is a mistake, hence my proviso that I expected to regret it. But it is now clear that this needed to be called out much faster and louder. I need to figure out the right policy. The Stat News takedown piece on this study seems to have been net useful despite being kind of terrible.
University of Texas Was Confident the Death Peak Had Passed Before It Did; What’s Happening There?
Several people who seem to be at least somewhat trying to think about Covid-19 have pointed to the University of Texas projections. When updated on 4⁄20, they had a 99% chance we were past the peak in deaths. That number had been 99% for several days already, so it was likely much more confident than that but not showing extra significant figures. Then on 4⁄21 we had the highest one-day death count so far.
Here is their brief explanation of what they are doing:
Key model assumptions: (1) The observed and projected numbers reflect confirmed COVID-19 deaths only. (2) The model estimates the extent of social distancing using geolocation data from mobile phones and assumes that the extent of social distancing does not change during the period of forecasting. (3) The model is designed to predict deaths resulting from only a single wave of COVID-19 transmission and cannot predict epidemiological dynamics resulting from a possible second wave.
The peak is defined as the day on which our model’s prediction for the average daily death rate stops increasing and begins to decrease.
For detailed technical information please view the Report: UT COVID-19 Mortality Forecasting Model.
They define peak as the model’s projection of the peak, rather than the actual peak. Which makes it a lot easier to be confident in something! Especially when a ‘second wave’ does not count, which you can interpret any number of ways. Still, it seems highly misleading and the people using the model are not going to interpret or use it the technically correct way.
In some ways I am sympathetic to ‘people are not using the model properly’ but I’m more sympathetic to ‘people are using the model exactly the way my model of people using models says people will use models given what we released and how we talked about it’ so maybe it’s on you to fix that or accept the responsibility for the consequences.
The approach of the Texas model is a direct response to criticism of the Nonsense Model (technically it’s called the IHME model) and the assumption that social distancing would be effective. Here’s their more technical summary of what’s going on:
In light of the popular appeal of the IHME model and considerable scrutiny from the scientific community, we have developed an alternative curve-fitting method for forecasting COVID-19 mortality throughout the US. Our model is similar in spirit to the IHME model, but different in two important details.

1. For each US state, we use local data from mobile-phone GPS traces made available by SafeGraph to quantify the changing impact of social-distancing measures on “flattening the curve.” SafeGraph is a data company that aggregates anonymized location data from numerous applications in order to provide insights about physical places. To enhance privacy, SafeGraph excludes census block group information if fewer than five devices visited an establishment in a month from a given census block group.

2. We reformulated the approach in a generalized linear model framework to correct a statistical flaw that leads to the underestimation of uncertainty in the IHME forecasts.

The incorporation of real-time geolocation data and several key modifications yields projections that differ noticeably from the IHME model, especially regarding uncertainty when projecting COVID-19 deaths several weeks into the future.
That’s… an improvement, I guess?
Number two is certainly good news as far as it goes, but it doesn’t go anywhere near far enough. The Texas model still obviously has way too little uncertainty.
The problem is the models are only counting the uncertainty from some sources, such as random fluctuations. They’re not taking into account potential model error, or systematic things of any kind that might be going on that they’ve missed, or the general ability of the world to surprise us. They’re not properly taking into account uncertainty over the quality of our tests and tools, when that matters, which is what sunk the study in Santa Clara. Nor are they using common sense.
Again, there’s nothing technically wrong with saying something like “If my assumptions and calculations are correct, then the confidence interval would be…” in a world in which your assumptions are not going to be correct (as long as you don’t include the inevitable “and they are…”) but it’s going to give a very false impression to essentially everyone.
There are some very nice graphs of cell phone location data in the paper linked in their summary. It would be great if there were a way to look at that data more generally; it seems pretty useful. And there is! Looking forward (as of writing this section) to doing a deep dive on that soon. Many thanks for that, University of Texas.
Essentially, this seems like it takes a model full of obvious nonsense, notices some of that obvious nonsense, and replaces it with things that are less obviously nonsense slash less nonsensical.
It still ends up hyper overconfident. It still has a curve downwards that looks symmetrical to the curve upwards, which is not going to happen – common sense says that if R0 was 3 before, the only way to decline at the same rate is to get R0 down to 1⁄3, and that is obviously, obviously not something that happened. We’d see it in the data.
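To spell out that common-sense point with a toy calculation (nothing here comes from the Texas model; R = 3 on the way up is the illustrative number from above, and R ~ 0.85 is the New York estimate from earlier):

```python
# Per serial interval, new infections multiply by R. A downslope that mirrors
# an R = 3 upslope therefore needs the reciprocal factor, R = 1/3.
r_up = 3.0
r_down_needed = 1.0 / r_up     # ~0.33 for a symmetric decline
r_down_observed = 0.85         # roughly what New York's data shows (from above)

print(f"needed for symmetry: {r_down_needed:.2f}")
print(f"actually observed:   {r_down_observed:.2f}")
# At R = 0.85 the decline is far slower than the rise; a symmetric curve
# would be unmistakable in the data, and it isn't there.
```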
In practice, I think that using the Texas model will result in slightly better decisions than the full nonsense model, but it is still very much better off ignored. Except for its data source, which seems highly useful.
How to Even Set Up The SEIRD Epidemic Model for Covid-19?
A paper plausibly attempted to address that question on 4⁄20, so I figured I’d go through it. As my readers know, I’ve been very critical of the SEIR framework. In particular, I hate the assumption that everyone is the same and has the same interactions and exposures. That assumption seems to be sufficiently wrong as to make the model results bear little resemblance to reality.
Here’s the paper abstract, to which I added paragraph breaks:
This paper studies the SEIRD epidemic model for COVID-19. First, I show that the model is poorly identified from the observed number of deaths and confirmed cases. There are many sets of parameters that are observationally equivalent in the short run but lead to markedly different long run forecasts. Next, I show that the basic reproduction number R0 can be identified from the data, conditional on the clinical parameters. I then estimate it for the US and several other countries and regions, allowing for possible underreporting of the number of cases.

The resulting estimates of R0 are heterogeneous across countries: they are 2-3 times higher for Western countries than for Asian countries. I demonstrate that if one fails to take underreporting into account and estimates R0 from the reported cases data, the resulting estimate of R0 will be biased downward and the resulting forecasts will exaggerate the number of deaths.

Finally, I demonstrate that auxiliary information from random tests can be used to calibrate the initial parameters of the model and reduce the range of possible forecasts about the future number of deaths.
I still see no mention of people not being identical within a region. We do see the acknowledgment that different regions are dramatically different.
Still, at the beginning of the epidemic, when approximately no one is immune, the calculations should be close enough to reality to be useful despite this huge error. Getting the parameters right, especially initial R0, seems central to the puzzle, so attempts to do that seem good even in the service of something that will over time become nonsense.
Interestingly this paper seems to think R0’s plausible range is much higher in the USA than previous estimates, partly because it’s much higher here than in Asia:
There is no agreement in the medical literature on the length of the incubation and infectious period for COVID-19, different values of these parameters result in the estimates of R0 for the US that range from 3.75 to 11.6.
This later gets narrowed down a bunch by using Iceland’s pseudo-random testing data.
An initial R0 that is much higher than the 4.0 I was using would help explain why containment is proving difficult. The paper claims that all these initial R0s are able to fit the data seen so far, and yield similar short-term models in terms of observed variables. Given how many things depend on so many other things, that seems like it’s a failure to observe enough variables?
It’s good to get concrete attempts to get the range of possible values for key variables. The paper claims average incubation is from 3 to 5 days, and that average disease duration is from 5 to 18 days, if you take the high and low estimates among estimates in the literature. That doesn’t seem like a safe way to get bounds on variables, but it’s a start.
The input parameters are confirmed cases and deaths. I would like to see systematic attempts to model true positive rates from positive test rates, since it’s clearly useful information and it is being ignored in these types of models.
This would hopefully solve the problem that is pointed out next, which is that if you can vary the initial case count, the observable fraction of infections (oh boy, the constants we’re assuming), R0 and CFR, you can get scenarios that look identical for a long time. You only find out the difference later, when you hit herd immunity effects. As I’ve said, I think those effects happen sooner, which would make the differences also show up sooner. If we don’t see New York’s implied R0 dropping substantially, that’s evidence against my interpretation of events and we’ll need to figure out where the mistake was; if New York’s R0 seems to keep dropping a bunch, that’s evidence for my interpretation.
And of course, we’d see different responses to social distancing and other ways of making R0 drop. Reducing R0 by 75% via distancing without any substantial herd immunity, and ending up with a steady infection rate, tells you R0 = 4 initially. If you end up with dropping infections, it was less. If you end up with increasing infections, it was more. Not that the 75% number is easy to measure.
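The relationship being used in that paragraph, sketched out (the 75% reduction and the candidate R0 values are the illustrative numbers from above, not estimates):

```python
# Effective R = initial R0 * (1 - distancing reduction) * susceptible fraction.
def r_eff(r0: float, reduction: float, susceptible: float = 1.0) -> float:
    return r0 * (1 - reduction) * susceptible

# With negligible herd immunity and a 75% reduction, a steady infection
# rate (R_eff = 1) pins the initial R0 at 4.
for r0 in (3, 4, 5):
    print(f"R0 = {r0}: R_eff = {r_eff(r0, 0.75):.2f}")
# R0 = 3 -> 0.75 (declining); R0 = 4 -> 1.00 (steady); R0 = 5 -> 1.25 (growing)
```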
I’m sad that there haven’t been more concrete attempts (at least that are known to me) to do this type of physical modeling. A lot of variables have tight relationships with other variables, so having even reasonable bounds on how much we cut R0 in percentage terms would be a big gain. But of course, there are tons and tons of opportunities for good data, and we miss out on almost all of them.
(I’d also note that I hate that papers regularly use tons of Greek letters to represent their variables instead of repeatedly saying what they are, slash not producing an easy glossary; I know it’s easier from a technical perspective, but from a layman’s perspective it makes life much more annoying.)
In theory, he claims, we can still figure out R0 from the curvature of the numbers. All we have to do is know the incubation time, the infectiousness time, and that the numbers are only off by constant factors; if deaths or infections are always off by a fixed percentage, that’s all right, as long as it’s a constant. It’s weird that the model uses incubation time and infectiousness time rather than serial interval, since those two variables don’t seem to determine serial interval, and serial interval is essentially what they’re being used for. The relative infectiousness in different periods, and relative patterns of behavior, matter, which is another thing being ignored.
Section 5 reports his estimation methods. It is amusing that he notes there are no error terms in SEIR models, and that thus he can’t match the observed data exactly, so he just tries to do curve fitting. Given the circumstances, I’m willing to accept this as, if not the best we can do, at least a reasonable attempt to 80⁄20 it rather than dive further.
He gets very high R in the USA; for example, in his “medium” scenario he gets 6.5 overall and 7 for New York, as opposed to 2 for Japan. The problem is that 6.5 and 7 are very close to the same number in practice, so it does not explain how things got so out of hand in New York and not elsewhere. Nor does it seem big enough given the obvious physical handicaps New Yorkers face in trying not to get infected. Density is much, much higher and presents tons of unique obstacles. Whereas the 2 for Japan is dramatically different, and it seems hard to believe that doesn’t involve a lot of baseline distancing and prevention if it’s remotely real. Can this be apples to apples?
Social distancing’s effect is beyond the scope of the paper so the projections it gives into the future don’t matter here, and I’m ignoring them entirely. The Iceland note is only for the purpose of pointing out that random sample testing would allow us to narrow our hypothesis space. That seems to be far more true in reality than with the model, because reality allows you to use the answer from random testing to pin a lot of other things down and allow that to cascade.
Mostly this paper seemed like a cautionary tale with additional things that are wrong with SEIR frameworks in the context of Covid-19. They have tons of variables and it takes a long time into the progression of the epidemic to be able to fix them into place reasonably. That’s in addition to the model making incorrect assumptions, which seem big enough that by the time you could differentiate, you’ll do it wrong.
My basic conclusion is that SEIR is not a useful model going forward. It can’t do anything we can’t do better by using more basic methods.
I find that to be true of complicated models in general. Once they get very close to accurate, they start to be great. But until that point, you’re going to get answers that are mostly not useful compared to using simpler heuristics that are more robust. The complex model is set up to profit from solving the complex problem to get at hidden stuff, but to do that you’d better get almost everything right or learn how to sidestep it.
Papers, Please
New study with claims of meaningful mutation in Covid-19
Claim is that European strain, which is also in New York (and I’m going to presume New Jersey and other surrounding areas as well, because physics), is deadlier than the strain elsewhere, generating orders of magnitude more viral load.
I doubt this effect is dramatic, but it is certainly possible. Can we check for plausibility?
One sanity check is that the CFR (case fatality rate) is 5.6% for NY/NJ vs. 4.3% for USA ex-NY/NJ (as of 4⁄20). New York is under-testing by more than elsewhere, because its positive rate is much higher. New York also reached its peak substantially earlier. New York also had more overwhelmed hospitals than elsewhere, and denied testing to less symptomatic people more aggressively than elsewhere.
Combining these factors, it seems clear that there isn’t much room for this type of effect unless there is a compensating factor somewhere. New York is younger and arguably healthier but the effect isn’t big enough.
So while a different viral load in vitro is certainly suggestive, I’m going to be doubtful that there’s much impact here, until I see more evidence. It’s still a potential variable that we can move around.
But what if it was true? What would it mean? There are three implications I can think of, if this is accurate.
The first implication is that if Europe and New York have a deadlier strain then the true infection counts are lower than we thought in those areas, because the only way to reasonably calculate infection rates is backing them out from the death counts. Or, alternatively viewed, that the rest of the United States has a less deadly strain, and its infection counts are higher. Or both.
New York’s infection rates being lower would be important because the herd immunity effects are definitely substantial in any scenario, and their magnitude is a bound on various other measurements. The ‘good news’ is that I’ve already been using a 1% IFR as my basis for calculations as a conservative guess, so there’s room for the strain to be worse and that guess to not be that wrong. The fact that herd immunity grows much faster than the idealized calculation is still mostly going to carry the day under any plausible hypothesis.
The second implication is that variolation could allow us to infect people with the least deadly strain. We add another potential variable to initial viral load. If the difference is dramatic, this could easily tip the scales. That’s how I discovered the study, as Robin Hanson wanted to know about this potential implication.
The third implication, unfortunately, would be that it is less likely that initial viral load is that important. That’s the common sense of one strain multiplying much faster, and thus presumably having a much higher initial viral load, yet only having a somewhat higher death rate than the milder strain. A factor of three here simply isn’t plausible. It would break all sorts of calculations and studies that now make sense.
Flashback: Estimating Infections From Confirmed Deaths
This is a site I found relatively early that uses a simple calculation: 23 days from infection to death, with an IFR of 1%. It then allows you to play with how rapidly the virus spreads before and after lockdown. I found it useful early in my thinking, and I now do a similar version of the same thing.
Given this uses official deaths, we know we’re going to add about 50% here to adjust for the official undercount.
Then the calculation assumes that under lock down, infections double once every 23 days. Using this, you get a result of 11.4 million Americans being infected on 4⁄21, or just over 3%. This implicitly assumes something like R0 = 1 with a serial interval of five days, and that the calculation starts with about half of people ever infected still infected now.
Of course, that means this calculation stops working once we’re a few days into the lockdown, since the percentage of ever-infected who are currently infected starts crashing even if R0 = 1, stretching out the doubling time, since we’re doubling something that doesn’t actually mean anything. We’re basing things off the wrong number here.
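A toy illustration of why that breaks, assuming R0 = 1, and that cases stay infectious for roughly two serial intervals (all numbers below are toy numbers, not data):

```python
# Under R0 = 1, new infections per serial interval are constant, so the
# "ever infected" total grows linearly rather than doubling on a fixed clock,
# and the currently-infected share of ever-infected keeps shrinking.
new_per_interval = 100_000    # constant new infections per interval (toy number)
infectious_intervals = 2      # how long a case stays infectious (assumption)

ever_infected = 0
for interval in range(1, 11):
    ever_infected += new_per_interval
    currently_infected = new_per_interval * infectious_intervals
    share = min(1.0, currently_infected / ever_infected)
    print(f"interval {interval:2d}: ever {ever_infected:>9,}  current share {share:.0%}")
# The share falls from 100% toward 20%: a "doubling time of the ever-infected"
# stops corresponding to anything epidemiologically real.
```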
Escape From New York
My broker is the latest person to provide intelligence on what’s going on in Manhattan and elsewhere in NYC. The report is that most of the people in better off areas have left. This matches other observations. One client is one of four left in her building and has exclusive use of the (therefore safe) in-building gym. The broker’s building is about 20% full. And so on.
When you look at the low rates of infection in Manhattan, you need to remember that those rates use a very wrong denominator.
The contrast with poorer areas in other boroughs is stark. In many of the hardest-hit places in Queens, Brooklyn and The Bronx, everyone is outside and no one is wearing a mask.
The New York City Subway: Menace or Scapegoat?
When I offered a simple model of New York City, I used the two categories of subway riders and non-riders.
I did not do this primarily on the basis of a map (it’s suggestive, but not more than that), or a statistical study.
I did this on the basis of are you freaking kidding me and have you seen the subway?
Subways are effectively enclosed spaces where social distancing is (at least at rush hour, and in some trains pretty much always) impossible and you mix with a random cross-section of people who are also doing the same thing.
Now there is a paper that says that the subways seeded the epidemic. The paper says some things and points out some graphs, but doesn’t give strong evidence of what is happening.
Thus there are some responses to the paper.
In less exciting news, we have a blogger who does a tear-down of the paper. The points made are fair, as far as they go. It’s good to point out that someone’s arguments are not great. The worry is the implied attitude. That attitude, and the accompanying title, “The Subway is Probably Not Why New York City is a Disaster Zone,” are emblematic of the view that until you prove something with p<0.05, with explicit evidence from a proper experiment, knowledge cannot exist.
The other post attempts to make a statistical case that it’s cars, rather than subways, that are responsible. Automobile and subway transit shares are such close mirror images (correlation of −0.88) that when you use both in a correlation, one of them will end up with a near-zero coefficient. It turns out that when you run the correlations, subways stops and local infection rates are negatively correlated, whereas cars and infection rates are positively correlated. There are also a bunch of Asian and European cities with mass transit systems people can actually use, because such nations care about good mass transit, and which don’t have higher rates than surrounding areas.
Which, of course, is all pretty freaking weird. If there’s one thing New York doesn’t have lots of relative to other places, it’s cars. Less than half (1.4mm of 3.1mm) of New York households own even one vehicle. In America, 90% of households have at least one, and the mean is 1.88 vehicles. If automobiles had “seeded the New York epidemic,” there wouldn’t be one.
Combine that with driving a car being obviously safe, whereas taking mass transit is obviously unsafe.
So we have a correlational argument for an impossible result. Which means that it’s almost certainly not causation. Something correlated with cars within the city of New York is causing problems.
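A toy illustration of the mirror-image problem (synthetic numbers, not the actual transit data): when two predictors are near-perfect mirror images, their correlations with any outcome are forced to be near mirror images too, so sign alone cannot tell you which one is doing the work.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
subway_share = rng.uniform(0, 1, n)
car_share = 1 - subway_share + rng.normal(0, 0.05, n)  # near mirror image of subway
risk = subway_share + rng.normal(0, 0.5, n)            # driven by subway, by construction

print(np.corrcoef(subway_share, car_share)[0, 1])  # strongly negative, ~ -0.99
print(np.corrcoef(subway_share, risk)[0, 1])       # positive
print(np.corrcoef(car_share, risk)[0, 1])          # nearly equal and opposite
# Even though only subway_share causes risk here, car_share "correlates" with
# it almost as strongly; the signs alone cannot identify the causal variable.
```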
The obvious answer is that subways are a proxy for being in Manhattan, cars are a proxy for being far from Manhattan, and Manhattan’s situation isn’t actually that bad. The other boroughs have it much worse. That’s presumably because Manhattan is full of wealthy people who fled in large numbers, and those who are left can more easily work from home. There’s also a speculation that those without cars tend to stay close to home and see very local people, which means overlapping and thus safer social graphs, which I find an interesting proposal.
The author claims they deleted Manhattan and re-ran the correlation, and it was still there, although the numbers aren’t provided.
Outside of Manhattan, presumably subways remain a proxy for wealth. It costs money to live near the subway. Those who value their time pay up; those who don’t, or can’t, find other transportation. So the same basic dynamics could easily be in play either way.
Regardless, I cannot believe that the subways aren’t a major contributor to the epidemic. The rate of infection among transit workers is as high as you’d expect. And also I believe in a physical world and the germ theory of disease. I mean, come on.
I find this a fascinating cautionary tale about correlation.
I also do think one has to downweight somewhat the role of the subway versus my previous model.
If that’s true, what does it imply? What would it mean if the New York subway wasn’t a major danger?
It would mean that the types of risk that you take on the subway are not major risks! And that would be true even before social distancing and other precautions.
Touching surfaces that other people touch? I have some news about how the subway works.
Being less than six feet apart in an enclosed space? Even touching them? I have some news about how the subway works.
So what’s missing from the subway that’s not missing from other stories of how people all get infected?
Talking. People on the subway don’t talk.
And remember that choir practice where everyone distanced and it didn’t matter?
Hmm.
Reopening Gambits
We are seeing partial reopenings and announced pending partial reopenings in a variety of states, including Florida, Kentucky and Georgia. The thinking behind this seems to be something like the following (this is not a strawman as far as I can tell).
The official guideline is that you can start reopening if there is a ’14 day decline’ in positive cases. So you have to do something that causes your problem to get less bad, prove it’s less bad, and then you can… stop doing it so it can get bad again?
Many, and perhaps most, people think that if things are improving, that means the worst is over and things will soon be fine. We can begin to reopen and relax.
This, of course, is crazy town.
And yet, it seems to be common on all levels. In New York, Cuomo keeps hammering “we have to stay vigilant” over and over again, because he knows that the public will interpret being “past the peak” as a reason to stop being vigilant, and perhaps cause another peak. Where the incentives are less aligned, things get that much worse.
We then combine that with the principle that whenever the doomsayer is listened to and the necessary actions taken, that person will look wrong, and we have a serious problem. Which the terrible models are helping to compound.
The ‘good news’ is that these experiments will be valuable. The states, as I noted last time, will not simply reopen. They will legally partially reopen, and people will continue to take precautions above and beyond what is required of them. It will be fascinating to see what will actually happen when people are given permission to return to work. How many will do so? What would fully voluntary behavior look like? And of course, would that behavior work to contain the virus?
The criteria should be, of course, evidence that R0 is sufficiently below 1 that you can reopen and keep R0 below 1, or your case count is sufficiently low that you can afford to do ‘the dance’ of The Hammer and the Dance for a while.
We’re also seeing really stupid reopening methods.
If you’re going to do a partial reopening, I am pretty sure tattoo parlors and gyms are really bad choices to single out in a one sentence statement. That happened.
If you’re going to have your beaches reopened, limiting their hours is not going to make them less crowded. Unless you’re prepared to ration tickets, either close, or don’t close. If anything, extending the hours is probably better. Let people distance in time and give them a way to avoid getting stir crazy.
The protests are strange. To what extent are they real versus astroturfed? As usual, hard to tell. Certainly a lot of people want to get back to work and think what we are doing isn’t worth it. Some of them think that because they don’t fully grok what Covid-19 is and what it is capable of doing. Some of them think that because freedom. Others are making a sensible judgment that depression level economic suffering is really, really bad.
I also don’t have a sense of how many people are at these protests, but I’m confident they are going to have high rates of infection. If people keep protesting, they will get enough exposure to be infected, given how they are intentionally not distancing while protesting, from what we can see. Could easily prove self-defeating.
Given the slow pace of decline in new cases, at least some amount of a second wave therefore seems all but inevitable in areas unwilling to maintain distancing and that have yet to acquire substantial herd immunity, such as those currently reopening. It won’t have the same velocity as the first one, but it will happen.
Thus, the continued market optimism seems to be very aggressive. I am an optimist about many aspects of the situation, but very worried that the market is missing the same thing as everyone else.
As I see it, there have been two major pieces of information that have moved my estimates in a positive direction: one is that the virus is somewhat less deadly/hard to suppress than the consensus here said in early March, the other is about the nature of the response to it. The first piece of news is some good luck we’ve had with the nature and spread of Covid, but the second is much more generally relevant, and even applies to X-risks. You summarised both together, but I’m going to separate them for that reason.
For the first, see e.g. from Marginal Revolution:
What we have learned is that Western-style lockdowns work: in a fairly diverse range of governments with a fairly wide range of competence in approaching this situation, R has been driven to some number noticeably below 1, not hovering around 1. There were several commentators arguing (perfectly reasonably, based on the one data point we had in mid-March) that only Wuhan-style lockdowns with case isolation, soldiers welding people into their homes and thousands of door-to-door contact tracers were sufficient to turn R substantially below 1. Luckily, we do not live in that world, and for that we should be grateful. This may connect to the other weird finding—that in many places (basically everywhere except NY, Italy, Spain), hospitals haven’t been overwhelmed the way they ‘should have’ based on our best knowledge in early March.
London hasn’t exceeded its hospital or ICU capacity (though its capacity was recently doubled with the new Nightingale hospital, which has hardly been used), and it has about half the infection rate of New York—one estimate here is that the infection rate was 7% in London on the 2nd of April—and those cases should have long since progressed and made their way to the hospitals, yet the Nightingale still stands mostly empty (for the love of all eternity, I hope the NHS doesn’t shut the place down because ‘it wasn’t needed’). Probably, London could take New York infection rates and that would just about hit its new capacity.
That fits with anecdotal evidence—I know a nurse who works in a big local hospital, and she told me that although things got pretty dicey over Easter time, with PPE being reused, operating theatres etc. being converted to beds and ICUs, and everyone working incredibly long shifts, nobody was being turned away from the ICUs like in Italy or Spain. The hospitals in London have started to empty out in the last week, as they’re ahead of the rest of the UK, and still it never reached the new capacity with a 7% infection rate before the peak (that came on April 8th, not April 2nd).
Perhaps uncertainty in the infection hospitalization rate is a better candidate than uncertainty in the IFR—has anyone checked out that 20% figure recently?
The second piece of good news is what I’d call the ‘Morituri Nolumus Mori effect’ - western governments and individuals repeatedly deciding to do the unthinkable in the face of the overwhelming threat, usually reactively, usually at the very last minute, has become a reliable pattern—one we can hopefully rely on in the future as the minimum standard for response. This isn’t mood affiliation, it has become a reasonably consistent pattern, at least in Europe and through large parts of the US.
E.g. the UK government said lockdowns were infeasible, and then implemented one a couple of weeks later—or the fact that compliance with the lockdown measures in Europe has been generally high and is showing no signs of slackening off—or most of the simple measures that will probably happen anyway, which were mentioned in On R0—or that concerns about contact tracing privacy and stimulus spending have evaporated in most treasuries. Combined with the key empirical fact about the virus (that reactively turning on a Western-style lockdown is enough to suppress it in most places), we can see the shape of the next year.
We (Europe, some USA) will try to implement a test-trace-isolate scheme to suppress the virus long term without any more lockdowns. That’s the plan, but it will be incredibly difficult to implement. It might work (I’d give Europe a 50⁄50 shot, the USA as a whole somewhat less), but we will simultaneously be ‘reopening’ with heavy social distancing, both legally enforced and through individual behaviour.
If there is another acceleration, it will be slower, we will have more time to react, and we will do another lockdown in the worst case when numbers are about as bad as what prompted the first lockdown, and that lockdown will do what it did this time around except with better treatment, bigger hospital capacity (and a smaller economy).
This rough schema was suggested as what would likely happen in early March, but then it was possible to claim governments and individuals were literally asleep at the wheel and voluntarily choosing to die—that now looks much less likely (leaving aside Trump). I recall several posters responding to that claim with ‘we’re too incompetent to even make a lockdown work’ or similar—that didn’t seem impossible given what we knew then.
It’s interesting to look back at the speculations in March about whether the Morituri Nolumus Mori effect would pull through or not. Wei Dai notably updated in the direction of thinking it would, after reading this prescient blogpost.
Toby Ord summed up the Morituri Nolumus Mori effect in a recent interview.
It is possible to reconcile this hopeful picture of our response, and this model of how we will deal with things going forward, with the very, very obvious civilisational inadequacy we have when addressing Covid—the lesson seems to be that reactivity is strong (when it’s legally allowed to be) even if availability bias, ignoring expected value and the rest make advance planning incredibly weak.
Of these two very general lessons, the first (massive inadequacy in preparation) is clearly known to all of us. But the second (reactive actions being strong) clearly came as a surprise to some people (I put significant weight on a total lockdown failure and explosive first wave, which isn’t happening) - partly because of the good luck that we’re in the timeline where R goes under 1 with an imperfect lockdown, but also because we didn’t attribute enough weight to the Morituri Nolumus Mori effect. I see it as the counterbalance to the meme of ‘civilisational inadequacy’ - both are good heuristics for predicting what is and isn’t going to be done by governments and individuals, both are important if you want to get your predictions right.
I’ve tried to tease out those general lessons because I think they can apply to any other rare, catastrophic risk, including X-risks. I note that Will Macaskill made reference to the Morituri Nolumus Mori effect in a pre-covid interview, explaining why he puts the probability of X-risks relatively low. Gates also referred to it.
I’m not going to update on future evidence I haven’t seen, but if Covid doesn’t kill the 5% of people it ‘should’ kill based on our best knowledge in early March, the Morituri Nolumus Mori effect’s unexpected strength will be one reason, and that should lead us to downgrade our estimate of the chance of genuine X-risks. The first thing I said, the day I became confident this was all really happening (late Feb), pointed to that connection.
Notably, the Tyler Cowen of a week ago thinks ‘social distancing is working well’ is not a cause for optimism: https://marginalrevolution.com/marginalrevolution/2020/04/social-distancing-is-working-so-well.html
I think this doesn’t properly consider the counterfactual where it didn’t work—the fact that it did work does imply we’re more the type of people who can make other interventions work as well, and lo and behold, it turns out there are lockdown alternatives that we have more reason to believe will work, given that lockdown is working.
I also think that the MNM effect is the main reason why both Metaculus and superforecasters consistently predict deaths will stay in the low millions: https://goodjudgment.io/covid/dashboard/. They have both heuristics, the MNM effect and inadequacy, and knew from the start.
The New York numbers are certainly interesting. I wonder if New York reached a no-longer-susceptible rate that means that it can no longer support long transmission chains in lockdown...
I am VERY VERY skeptical of this paper.
The mutations they show are TINY. And their methods section does not include any details at ALL of how they grew their viruses, or how they diluted them to a multiplicity of infection of 0.5. Short version, they were supposed to grow cultures of each of their 11 viral isolates, measure how infective each culture was, and then dilute them so there were 50% as many infective viruses as cells in the flasks they poured them into to make sure they all started from the same point. But they give no details of how they measured this and did the dilution.
When growing viruses, batch effects are a BITCH. Your virus culture might be a little differently diluted, the viruses might be a different average age, or your culture might’ve been produced in a corner of the incubator that was a little warmer. The lines for each of the viruses bounce around, some higher and some lower over time, and all I really see is a cloud of viral RNA levels that goes forward over time with some high and some low at any given moment. They have four replicates of each virus and DO show that the replicates behave exactly like each other, but their methods section doesn’t say if they took the same diluted virus stock and put it into four culture flasks, or made four separate stocks of each virus. It is QUITE QUITE possible that they are just seeing differences in the stocks they grew that have to do with their culture histories and dilution details, rather than genetic differences.
On another point, the cell line they grow them in isn’t even human, and doesn’t really have an innate immune response, which would be by far the most important factor for real-world infection. Their high viral replication lines were also not from their more severe cases.
They definitely did find ONE interesting thing—one of their isolates appears to have independently invented a particular missense mutation seen in another strain elsewhere in the pile of global sequences. Actually suggestive of selection. It’s not in the receptor binding domain of the S protein though, so it shouldn’t affect binding or immunity much.
Other papers have found MUCH more interesting mutations—there is a cluster of cases in Singapore that have up and completely lost one of the accessory proteins of the virus, which is involved in squashing the human innate immune response, and another case in Arizona has broken another such accessory protein beyond repair. This is presumably because this bugger evolved in bats, and their interferon response is on a freaking hair trigger. The anti-immune-system measures of the virus are overclocked relative to what you need to replicate in humans, and losing some of them doesn’t really hurt them.
I’m still very much confused by what is going on in the US and elsewhere. Here is my basic estimate:
The US had 2,000 deaths on April 7. If 1% of people die 3 weeks after being infected, it means that around March 17 there were 200,000 new infections, almost 10 times the official count. The confirmed cases then increased at least 10 times two weeks later, around April 1. That should imply that the likely infection count was also up ten times, to maybe 2,000,000 cases a day. Which would mean 20,000 deaths a day 3 weeks after that, so right about now. Yet the official numbers are just over 2,000 deaths a day.
So, I do not understand what is going on. And I don’t know which input numbers are off. Or which assumptions.
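For concreteness, the comment’s arithmetic as a sketch (every number below is the commenter’s stated assumption, not data):

```python
# Back infections out from deaths, then project deaths forward, per the
# comment above. All inputs are the commenter's assumptions.
ifr = 0.01                # assumed infection fatality rate
deaths_apr7 = 2_000       # US deaths per day around April 7

infections_mar17 = deaths_apr7 / ifr              # ~200,000/day, ~10x official count
case_growth = 10                                  # confirmed cases up ~10x by April 1
infections_apr1 = infections_mar17 * case_growth  # ~2,000,000/day
expected_deaths_now = infections_apr1 * ifr       # ~20,000/day three weeks later

print(f"{infections_mar17:,.0f} -> {infections_apr1:,.0f} -> {expected_deaths_now:,.0f}")
# Expected ~20,000 deaths/day vs. ~2,000 observed: at least one input is off.
```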
After the stay-at-home orders started (~22 March) we no longer expect to see exponential growth in actual infections so the delay between infections and cases identified causes there to be a varying ratio between them.
Add that to the fact that the testing rate was the main thing controlling how many cases were identified, which messes everything up. In late March/early April the positive rate of tests in New York was ~50%, which renders the numbers fairly meaningless.
I’ve updated modestly against surface transmission or fully (or even partially) aerosolized transmission because of this (and other things). I am still very reluctant to go to my nearby very busy grocery store (in Brooklyn).