Weekly Incidence vs Cumulative Infections

Imagine you have a goal of identifying a novel disease by the time some
small fraction of the population has been infected. Many of the signs
you might use to detect something unusual, however, such as doctor
visits or shedding into wastewater, will depend on the number of
people currently infected. How do these relate?
Bottom line: if we limit our consideration to the time before anyone has noticed
something unusual, where people aren’t changing their behavior to
avoid the disease, the vast majority of people are still
susceptible, and spread is likely approximately exponential, then:
$$\text{incidence} = \text{cumulative infections} \times \frac{\ln(2)}{\text{doubling time}}$$
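For example, with illustrative numbers (not from the chart below): if 0.5% of
people have ever been infected and the doubling time is two weeks, then

$$\text{incidence} \approx 0.5\% \times \frac{\ln(2)}{2\ \text{weeks}} \approx 0.17\%\ \text{per week}$$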
Let’s derive this! We’ll call “cumulative infections” $c(t)$, and “doubling
time” $T_d$. So here’s cumulative infections at time $t$:

$$c(t) = 2^{t/T_d}$$
The math will be easier with natural exponents, so let’s define

$$k = \frac{\ln(2)}{T_d}$$

and switch our base:

$$c(t) = e^{kt}$$
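As a sanity check that switching the base didn’t change the function,
substitute $k$ back in:

$$e^{kt} = e^{t \ln(2)/T_d} = \left(e^{\ln 2}\right)^{t/T_d} = 2^{t/T_d}$$

which is the $c(t)$ we started with.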
Let’s call “incidence” $i(t)$, which will be the derivative of $c(t)$:

$$i(t) = \frac{d}{dt} c(t) = \frac{d}{dt} e^{kt} = k e^{kt}$$
And so:
$$\frac{i(t)}{c(t)} = \frac{k e^{kt}}{e^{kt}} = k = \frac{\ln(2)}{T_d}$$
Which means:
$$i(t) = c(t) \, \frac{\ln(2)}{T_d}$$
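If you’d rather have this as code, here’s a minimal sketch (the function name
weekly_incidence is mine, just for illustration):

import math

def weekly_incidence(cumulative_infections, doubling_time_weeks):
    # incidence = cumulative infections * ln(2) / doubling time
    return cumulative_infections * math.log(2) / doubling_time_weeks

# e.g. 1% ever infected, doubling weekly -> about 0.0069, i.e. 0.69% per week
print(weekly_incidence(0.01, 1))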
What does this look like? Here’s a chart of weekly incidence at the
time when cumulative incidence reaches 1%:
For example, if it’s doubling weekly, then when 1% of people have ever
been infected, 0.69% of people became infected in the last seven days,
representing 69% of people who have ever been infected. If it’s doubling
every three weeks, then when 1% of people have ever been infected, 0.23%
of people became infected this week, or 23% of cumulative infections.
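Those two numbers are just the formula evaluated at $c(t) = 1\%$:

$$1\% \times \frac{\ln(2)}{1\ \text{week}} \approx 0.69\%\ \text{per week}
\qquad
1\% \times \frac{\ln(2)}{3\ \text{weeks}} \approx 0.23\%\ \text{per week}$$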
Is this really right, though? Let’s check our work with a bit of
very simple simulation:
def simulate(doubling_period_weeks):
    # Step week by week until cumulative infections first reach the 1%
    # threshold, then report that week's incidence.
    cumulative_infection_threshold = 0.01
    initial_weekly_incidence = 0.000000001
    cumulative_infections = 0
    current_weekly_incidence = 0
    week = 0
    while cumulative_infections < \
            cumulative_infection_threshold:
        week += 1
        current_weekly_incidence = \
            initial_weekly_incidence * 2**(
                week / doubling_period_weeks)
        cumulative_infections += \
            current_weekly_incidence
    return current_weekly_incidence

for f in range(50, 500):
    doubling_period_weeks = f / 100
    print(doubling_period_weeks,
          simulate(doubling_period_weeks))
This looks like:
The simulated line is jagged, especially for short doubling periods,
but that's not especially meaningful: it comes from running the
calculation a week at a time, so some weeks end up just above or just
below the (arbitrary) 1% goal.
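If you’d rather compare numbers than squint at the chart, here’s a quick
cross-check (a sketch reusing the simulate() function above) that prints the
simulated weekly incidence next to the closed-form prediction
$0.01 \times \ln(2)/T_d$; as discussed above, expect the agreement to be rough
for short doubling periods because of the week-at-a-time stepping:

import math

for f in range(50, 500, 50):
    doubling_period_weeks = f / 100
    simulated = simulate(doubling_period_weeks)
    # closed-form: incidence = cumulative infections * ln(2) / doubling time
    predicted = 0.01 * math.log(2) / doubling_period_weeks
    print(doubling_period_weeks, simulated, predicted)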