This blog post argues that the now popular idea of “flattening the curve”, in the sense that most people get exposed but slowly enough to not overwhelm the health care system, is not feasible. The upshot is that we’ll either achieve containment or suffer at least widespread regional health care system collapse (and maybe Wei Dai’s global health care collapse outcome). I haven’t spent much time modeling this yet, but tentatively it looks like flattening the curve requires very precise fine-tuning of R0 to stay on a path very close to 1 for at least several months, which seems impossible to pull off.
It feels to me now that flattening the curve is just a nice graphic without anyone checking the math, but I am confused that many informed-seeming experts are promoting the idea. Anything I’m missing?
ETA: I made an epidemic + hospitalization model (Google Sheets); it sure looks like the usual flatten-the-curve chart is a comforting fiction. Peak hospital bed demand in the uncontrolled epidemic scenario is usually drawn at 2-3x hospital capacity; I’m getting 25x, and the chart looks a lot less reassuring. My shakiest assumptions are the hospitalization / intensive care rates; any feedback there would be very helpful.
Disclaimer: I don’t know if this is right, I’m reasoning entirely from first principles.
If there is dispersion in R0, then there would likely be some places where the virus survives even if you take draconian measures. If you later relax those draconian measures, it will begin spreading in the larger population again at the same rate as before.
In particular, if the number of cases is currently decreasing overall in most places, then soon most of the cases will be in regions or communities where containment was less successful, and so the number of cases will stop decreasing.
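As a quick sanity check of this aggregation effect, here is a toy two-region calculation (my own sketch with made-up numbers, not something taken from the blog post): the total falls for a few generations and then turns around once the poorly-contained region dominates the case mix.

```python
# Toy illustration (made-up numbers) of the dispersion argument: two regions,
# cases multiplied by their own R each "generation". The aggregate count falls
# at first, then stops falling once the less-contained region dominates.
cases = {"well-contained": 10_000.0, "poorly-contained": 100.0}
R = {"well-contained": 0.7, "poorly-contained": 1.5}   # assumed values

for generation in range(12):
    total = sum(cases.values())
    share = cases["poorly-contained"] / total
    print(f"gen {generation:2d}: total {total:8.0f}  poorly-contained share {share:5.1%}")
    cases = {region: n * R[region] for region, n in cases.items()}
```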
If it’s infeasible to literally stamp it out everywhere (which I’ve heard), then you basically want to either delay long enough to have a vaccine or have people get sick at the largest rate that the health care system can handle.
If it’s infeasible to literally stamp it out everywhere (which I’ve heard), then you basically want to either delay long enough to have a vaccine
South Korea, Singapore, Italy
or have people get sick at the largest rate that the health care system can handle.
The UK.
We’re running an interesting experiment to see which approach works. One potential benefit is that the world will be able to observe which of the two strategies is viable and switch between them, at least theoretically. Practically, switching from ‘suppress/contain’ to ‘flatten curve’ seems a lot more feasible than the alternative of trying to suppress after not taking tough measures, as the UK will have to do if its strategy means cases grow out of control. South Korea could still try to use curve-flattening as a backup plan.
However, for the reason given in the blog post, suppression will be a viable backup even if switching from curve-flattening to suppression is intrinsically harder than the other way round.
The interventions of enforced social distancing and contact tracing are expensive and inevitably entail a curtailment of personal freedom. However, they are achievable by any sufficiently motivated population. An increase in transmission *will* eventually lead to containment measures being ramped up, because every modern population will take draconian measures rather than allow a health care meltdown. In this sense COVID-19 is not, and will probably never be, a full-fledged pandemic with unrestricted infection throughout the world. It is unlikely ever to be allowed to reach high numbers again in China, for example. It will instead always be a series of local epidemics.
Still seems to me like you should be able to isolate those problem areas from the rest of the country. Then even if you can’t contain the epidemic inside, you spare most of the country (for the moment). But I think we mostly agree. A scenario that seems increasingly likely to me is that governments will intervene in increasingly strict ways until we get very close to true containment (before ~15% of the world is infected), and then will loosen movement restrictions in more-contained areas while playing whack-a-mole with a sequence of localized outbreaks for 1-2 years until a vaccine is ready.
soon most of the cases will be in regions or communities where containment was less successful and so the number of cases will stop decreasing. If it’s infeasible to literally stamp it out everywhere (which I’ve heard),
Borders, travel restrictions, cancellation of large events, contact tracing and testing will solve this.
Borders are necessary precisely because of this dispersion issue.
That’s an interesting question, and it seems like it ought to be checkable numerically.
I made an attempt using this simulator of the fairly-naive “SIR” model of disease transmission:
http://www.public.asu.edu/~hnesse/classes/sir.html?Alpha=0.3&Beta=0.07&initialS=1000&initialI=100&initialR=0&iters=50
Note that this simulator appears to be someone’s class project. However, its behavior seems to track more or less with what I’d expect. But I’d love for someone with more experience to reproduce this relatively simple model and check it.
You can read about the model at https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SIR_model .
I have limited confidence that I’ve understood it correctly, so take this for what it’s worth. It looks to me like the time step used in this simulator is one day. So the gamma parameter (rate of recovery per unit time) should be (Wikipedia says) 1/D where D is the duration of the disease. (For transmission modeling purposes, this should be the infectious duration, not the duration of symptoms.) I chose gamma=0.07, meaning D ~= 14 days, semi-arbitrarily, based on https://www.medrxiv.org/content/10.1101/2020.03.05.20030502v1 (which says 10 days after start of symptoms) and the general figure of 14-day quarantines.
The beta parameter is the transition rate from “susceptible” to “infected” per person infected per unit time. (That is, beta*I is the overall transition rate.) I think therefore R = D*beta (the total number of new infections per person should equal the duration times the number of infections per unit time), so beta = R/D = R*gamma.
All that being said, given those assumptions, here are what I think the plots look like for various R values. (Note that the names of the parameters given in the URL do not appear to match the names in the UI. I think the URL parameter names are just wrong; the model behaves as I would expect it to. It’s a very simple model and I’d love for someone to independently check this.)
R=4.82 (beta=0.34) (upper cited estimate from Wikipedia): http://www.public.asu.edu/~hnesse/classes/sir.html?Alpha=0.344&Beta=0.07&initialS=1000&initialI=100&initialR=0&iters=50
R=3.5 (beta=.25): http://www.public.asu.edu/~hnesse/classes/sir.html?Alpha=0.25&Beta=0.07&initialS=1000&initialI=100&initialR=0&iters=50
R=2.28 (beta=.16) (estimate based on the Diamond Princess data, https://www.ncbi.nlm.nih.gov/pubmed/32097725): http://www.public.asu.edu/~hnesse/classes/sir.html?Alpha=0.16&Beta=0.07&initialS=1000&initialI=100&initialR=0&iters=50
R=2 (beta=.14): http://www.public.asu.edu/~hnesse/classes/sir.html?Alpha=0.14&Beta=0.07&initialS=1000&initialI=100&initialR=0&iters=50
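Since the simulator is a class project and the comments above ask for an independent check, here is a minimal discrete-time SIR sketch (my own code; the one-day Euler step, the 1000+100 initial split from the URLs, and gamma = 1/14 are assumptions carried over from the reasoning above, not anything the simulator documents):

```python
# Minimal discrete-time SIR to cross-check the linked simulator.
# Assumptions: one-day Euler steps, S0=1000 / I0=100 as in the URLs, gamma=1/14.
def peak_infected_fraction(R0, gamma=1/14, S0=1000, I0=100, days=300):
    beta = R0 * gamma                      # beta = R*gamma = R/D, as derived above
    N = S0 + I0
    S, I = float(S0), float(I0)
    peak = I
    for _ in range(days):
        new_infections = beta * S * I / N  # standard SIR force of infection
        recoveries = gamma * I
        S -= new_infections
        I += new_infections - recoveries
        peak = max(peak, I)
    return peak / N

for R0 in (4.82, 3.5, 2.28, 2.0):
    print(f"R={R0:4.2f}: peak ~{peak_infected_fraction(R0):.0%} of the population infected at once")
```

If this disagrees with the simulator’s plots, the likely culprits are whether it normalizes the infection term by N and whether its time step really is one day.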
So it looks to me like very substantial curve-flattening ought to be possible, based on this simplified model, at quite realistic R values. Whether it’s possible to flatten it enough to prevent health system overload is anybody’s guess—likely not everywhere—but it looks like there are substantial benefits possible.
Thanks for pointing me in this direction. I think the key worry highlighted in the post is that the health care system gets overwhelmed with even just a few percent of the population being infected. So even if we can bring peak infections down by a factor of 2-4 by slowing transmission, the health care system is still going to be creamed at the peak.
I’ve now built a discrete-time, Bay Area version of the SIR model (+ hospitalization) in this Google sheet. I assume 20% of infections need hospitalization, of which 20% need intensive care, and use raw bed-to-population ratios (non-COVID utilization vs stretching capacity should roughly cancel out). Hospital bed availability at peak infections is 4% (25x over capacity) in the uncontrolled beta=0.25 scenario and only improves to 10% (10x over capacity) in the “controlled” beta=0.14 scenario. Even if my hospitalization/ICU numbers are too high by a factor of 5 the “controlled” scenario still looks pretty terrible. Any feedback on the model assumptions would be super useful.
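For anyone who wants to poke at the same numbers without the spreadsheet, here is a rough sketch of the peak-bed-demand calculation described above (my own code; the ~7.7M Bay Area population, the day-0 infected count, the 20% hospitalization rate, and ~2.8 beds per 1,000 people are assumptions mirroring the comment and rough US averages, not authoritative figures; the same arithmetic applies to ICU beds with the ICU rate and ICU-bed ratio swapped in):

```python
# Rough sketch of peak hospital-bed demand vs. capacity for two beta values.
# Assumed inputs (mirroring the comment, not authoritative): 20% of active
# infections need a bed, ~2.8 hospital beds per 1,000 people, Bay Area ~7.7M.
def peak_bed_demand(beta, gamma=1/14, population=7_700_000, I0=5_000,
                    hosp_rate=0.20, beds_per_1000=2.8, days=365):
    S, I = population - I0, float(I0)
    peak_I = I
    for _ in range(days):
        new = beta * S * I / population
        S -= new
        I += new - gamma * I
        peak_I = max(peak_I, I)
    beds_needed = peak_I * hosp_rate
    beds_available = population / 1000 * beds_per_1000
    return beds_needed, beds_available

for label, beta in (("uncontrolled", 0.25), ("controlled", 0.14)):
    need, have = peak_bed_demand(beta)
    print(f"{label}: ~{need:,.0f} beds needed at peak vs ~{have:,.0f} available "
          f"(~{need / have:.0f}x over capacity)")
```

With these assumptions it lands in roughly the same ~25x / ~10x range as the spreadsheet, which suggests the arithmetic isn’t the issue; the contentious inputs are the hospitalization rate and the bed counts.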
I haven’t checked your models quantitatively, but qualitatively I absolutely believe you that the options here are “bad” and “really really bad”, and that neither one of them gets us down to where we need to be.
The difference between 4% and 10% could still save a lot of lives; at that level it may be close to 1:1 (every bed freed up is a life saved), since only the most critical cases will be getting beds at that point.
But you’re right that this is clearly not adequate, and the graphic showing the flatter curve as peaking under the capacity line is pretty misleading. (There are versions of the graphic which don’t, but they appear to have been memetically outcompeted by those that do.)
I think it’s still true that “flattening the curve” will save lives, potentially a lot of lives, so even if the graphic might be a bit misleading as to the possibility of flattening it below the critical threshold, I think it’s still a reasonable meme to promote.
But really the ultimate goal has to be reducing R below 1, which will arguably flatten the curve, just not quite in the way the meme seems to be trying to get at. I don’t want to steer too close to dark side epistemology here, but if the meme gets people to stay inside, cancel their parties, and wash their fucking hands… it’s hard for me to be too against it, and I think it’s probably true enough?
I don’t know how other people react. I took the epidemic fairly seriously but my initial reaction to the meme was one of reassurance/complacency—OK so I can’t avoid eventual exposure anymore, but at least things will proceed in a somewhat orderly fashion if we cancel big events, wash hands, stop touching our face, etc. I feel like this is the sort of attitude that contributes to, and allows the public to accept, decisions like the capitulation in Sacramento. The mental image of mitigation is “basically trying to mitigate the risk to those who are most at risk: the elderly and those with chronic underlying conditions”. The reality is that we’ll be forced to let all the old and sick die in hospital parking lots.
It seems to me fairly likely that the public will ultimately accept the Hubei-style lockdowns that will result in containment, but this meme probably is responsible for delaying that moment by at least a few days :(
I saw the meme as mostly targeting people who were currently even more complacent “eh, there’s nothing we can do, so fuck it”, and getting them to instead go “okay, there’s stuff that’s actually worth doing.”
You’re probably right.
Hospital bed availability at peak infections is 4% (25x over capacity) in the uncontrolled beta=0.25 scenario and only improves to 10% (10x over capacity) in the “controlled” beta=0.14 scenario.
Alex, I’m looking at your spreadsheet and I don’t understand where you got these bold numbers from. It looks like you tweaked your sheet a bit since writing this comment, but still I can’t figure out what you are looking at when you say 25x and 10x over capacity. Could you explain?
Yeah, I got better hospitalization/ICU rates from Bucky and upped beta to 0.3 in the uncontrolled scenario to make a point on Twitter. Hospital/ICU bed availability (%) is graphed in each scenario tab; by overcapacity I mean the inverse of availability. Alternatively, take the ratio of the peak to the capacity line in the Charts tab. It looks like ~15x and 5x now for hospital beds.
That’s a really interesting blog post, and it made me update (towards the idea that containment efforts in most countries will keep ramping up until containment actually succeeds). How did you come across it? I’ve been following Twitter, a couple of FB groups, and Reddit, and it didn’t get linked by any of the posts I saw.
I’m wondering this too.
Don’t recall how I ended up seeing it, but it was through this tweet by the author: https://twitter.com/DanielFalush/status/1236918870780198912 (ETA: Razib Khan RT’d him)
Perhaps the numbers work out better when you include cocooning of populations that disproportionately make use of hospital resources.
It feels to me now that flattening the curve is just a nice graphic without anyone checking the math, but I am confused that many informed-seeming experts are promoting the idea. Anything I’m missing?
I think each little bit of curve flattening makes things a little less bad (since a smaller number of cases are beyond capacity, and a little more time is created to prepare), but the graphs tend to draw the “capacity” line unrealistically high. This graph is more realistic than many since the flattened curve still peaks above the capacity line, but it still paints too rosy a picture.
Nice model.
For hospitalisation / intensive care, the original data from China had 14% “severe” and 5% “critical” cases. These are percentages of diagnosed cases, so you would need to modify them by the diagnosis rate.
For the Diamond Princess, about 50% of cases were asymptomatic, so that is likely an upper limit on the diagnosis rate. Ascertainment rates from these papers are highly variable, so an actual number here is hard to estimate.
That suggests hospitalisation is probably no more than 10% and intensive care no more than 2.5%. These numbers are a bit lower than your model but not enough to get us out of the woods.
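Spelling out that adjustment with the numbers quoted in this comment (the 50% diagnosis-rate ceiling is the Diamond Princess figure above):

```python
# Severe/critical rates are fractions of *diagnosed* cases; multiply by the
# diagnosis rate to get fractions of *all* infections (assuming severe cases
# are essentially always diagnosed). Diagnosis rate capped at ~50%.
severe_of_diagnosed, critical_of_diagnosed = 0.14, 0.05
diagnosis_rate_ceiling = 0.50

print(f"hospitalisation: at most ~{severe_of_diagnosed * diagnosis_rate_ceiling:.1%} of infections")
print(f"intensive care:  at most ~{critical_of_diagnosed * diagnosis_rate_ceiling:.1%} of infections")
# ~7.0% and ~2.5%, in line with the "no more than 10% / 2.5%" bounds above.
```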
From the blog post:
If most people who need it do not have access to ventilators, which is inevitable if even a percent of the population are infected at any one time, then on the order of 4% of infected individuals will die.
I have heard ‘5-15%’, ‘20%’ and ‘12%’ for hospitalization/‘no-treatment fatality’ rates, with a trend that the newer estimates tend to be lower. The initial figure from China was a blood-curdling 20%, as you said, while a current projection based on evidence from real overwhelmed healthcare systems is a merely very bad 3-5%. This is lower by a larger factor than most of the reductions to the CFR that account for undocumented cases—perhaps indicating there are more undocumented cases than those corrections imply?
Also, of relevance to the UK’s strategy (cocooning older people from infection), how does this break down by age? This poster has estimated that a young male with no pre-existing conditions has about 1/4 the risk of hospitalization (assuming a 50/50 chance that the intersection of age ~30 and no pre-existing conditions has a much lower risk than either alone), which means that if older and vulnerable people can be ‘cocooned’, the actual rate of hospitalization can be slashed again by a factor of 4 to something bearable, around 1%, if you take 4% as the baseline.
(Note that the corrections in this paper for delay to death and underreporting skew the death rates even more strongly towards older patients, with the fatality rate among 20-29-year-olds barely changing after adjustment but the fatality rates among those 60+ doubling.)
That means you could surf a wave of a few hundred thousand people having the virus at a time and still provide adequate ICU space. With some expansion in capacity, that could be even higher.
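A back-of-envelope version of that claim (my arithmetic, not the commenter’s; the ~4,000 UK critical care beds figure is a rough assumption, and the two ICU rates bracket the ~2.5% bound above and a 4x-lower ‘cocooned’ value):

```python
# How many concurrent infections ~4,000 critical care beds could cover,
# for two assumed ICU rates (2.5% from above; ~0.6% if cocooning cuts it ~4x).
uk_critical_care_beds = 4_000          # rough assumption, not an official figure
for icu_rate in (0.025, 0.006):
    supportable = uk_critical_care_beds / icu_rate
    print(f"ICU rate {icu_rate:.1%}: ~{supportable:,.0f} people infected at once")
# ~160,000 and ~670,000 respectively, i.e. "a few hundred thousand".
```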
Thanks for digging these up! I updated the model. Still terrible.
I’m also wondering why you are coming up with a LOT more hospitalizations than even the total number of cases reported in China.
In early April, if I’m reading this right, you are expecting the Bay Area to need over 80,000 hospital beds for COVID-19 in the uncontrolled case (I assume that is merely a comparison scenario), and then after 3 months, say starting July, the controlled scenario needs about 81,000 hospital beds. Then things keep going up.
That makes it seem like something is missing. Why would the Bay Area really expect to see such drastically higher impact than China as a whole? Using your 20%, 20% assumption and saying China is at 85,000 cases now, the total demand for hospital beds would have been 20,400 over the entire December-March time period.
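For reference, the 20,400 figure appears to come from the following arithmetic (my reconstruction of the 20%/20% assumption applied to 85,000 cases):

```python
# Reconstruction of the 20,400 figure: 20% of 85,000 cases need a ward bed,
# and 20% of those additionally need intensive care.
total_cases_china = 85_000
ward_beds = total_cases_china * 0.20   # 17,000
icu_beds = ward_beds * 0.20            # 3,400
print(ward_beds, icu_beds, ward_beds + icu_beds)   # 17000.0 3400.0 20400.0
```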
China locked down Wuhan at ~500 confirmed cases and many other Hubei cities the next day, which immediately lowered transmission (see Chart 7 here) to R0 below 1. This is very far from the uncontrolled scenario and still overloaded the health care system. This is much of the point of the post I linked—the degree of hospital overload in an uncontrolled scenario is so high that even huge reductions in transmission don’t realistically avoid overload if R0 stays above 1.
I do get that point, and I think it is well made. At the same time, I find the numbers produced a bit on the high side. Clearly, saying the 20,400 number is within existing capacity for the Bay Area completely ignores current patients unrelated to COVID-19. But perhaps under a regime of social distancing, containment, and isolation (both of known cases and voluntarily by the more concerned), both the speed of growth and the total numbers your model is producing would be much closer to manageable.
I think it might also be worth considering that hospital beds are, to some extent, not a fixed quantity but one that can expand as demand increases. Consider using hotels or other (these days rather vacant) buildings/structures. That’s basically what China has done here (and, in other cases, with their “lego” hospitals built in 10 days): it rejected the usual concept of what a hospital is and how fixed the supply is.
Just as an assumption check: was your hospital bed / ICU bed value an average for, say, the USA or some other country-level metric, or an average for the local hospitals’ service area?
I used overall US numbers. I didn’t consider capacity expansion but also didn’t take out already-occupied beds, as I think both are roughly on the order of 2-5x in opposite directions. The only Bay Area-specific numbers are population and day 0 infected (I assumed ~10x confirmed cases).
It worked in 1918: https://qz.com/1816060/a-chart-of-the-1918-spanish-flu-shows-why-social-distancing-works/
I should have made it clearer I don’t deny we can literally flatten the curve, but rather the idea that
most people get exposed but slowly enough to not overwhelm the health care system.
Unclear to me how well St Louis did on the health care system front. Also, the pairing of Philadelphia and St Louis is a bit convenient if you consider the raw scatterplot (panel C bottom left—ETA Philadelphia is the dot closest to Pittsburgh per this table).