My understanding is that’s true for Rt (the effective reproduction number), but R0 is a general “this is how contagious this virus is” measure.
In reality, varying proportions of the population are immune to any given disease at any given time. To account for this, the effective reproduction number Re (usually written Rt) is used: the average number of new infections caused by a single infected individual at time t in the partially susceptible population.
I think you’re correct that the difference between R0 and Rt is that Rt takes into account the proportion of the population already immune.
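To make that concrete: under the simplest homogeneous-mixing assumption, Rt = R0 × s(t), where s(t) is the fraction of the population still susceptible at time t. A minimal sketch of that relationship (all numbers invented for illustration, not real disease parameters):

```python
# Minimal sketch: Rt from R0 under homogeneous mixing.
# Assumes Rt = R0 * s(t), where s(t) is the susceptible fraction at time t.

def effective_r(r0: float, immune_fraction: float) -> float:
    """Effective reproduction number when part of the population is immune."""
    susceptible_fraction = 1.0 - immune_fraction
    return r0 * susceptible_fraction

def herd_immunity_threshold(r0: float) -> float:
    """Immune fraction at which Rt drops to 1 (epidemic stops growing)."""
    return 1.0 - 1.0 / r0

r0 = 15.0  # illustrative measles-like R0
for immune in (0.0, 0.5, 0.9, 0.95):
    print(f"immune={immune:.2f} -> Rt={effective_r(r0, immune):.2f}")
print(f"herd immunity threshold: {herd_immunity_threshold(r0):.3f}")
```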
However, R0 is still dependent on its environment. A completely naive (no prior exposure or immunity) population of hermits living in caves hundreds of miles from one another has an R0 of 0 for nearly anything. A completely naive population of immunocompromised packed-warehouse rave attendees would probably have an R0 of 100+ for measles.
I don’t know if there is another Rte-type variable that tries to define the infectiveness of a disease given both the prevalence of immunity and the environment. Seems like most folks just kinda assume that everything about the environment other than the immune proportion is constant when comparing R0/Rt figures.
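That said, in simple SIR-style models the pieces can be written out explicitly: R0 ≈ p × c × D (transmission probability per contact × contact rate × duration of infectiousness), and then Rt ≈ R0 × s. The environment mostly enters through c, and immunity through s, so multiplying them together is one way to get a combined “environment + immunity” number. A sketch under those assumptions (the parameter names and all numbers here are invented for illustration):

```python
# Sketch: decomposing R under simple SIR-style assumptions.
# R0 ~= p * c * D  (transmission prob. per contact * contacts/day * infectious days)
# Rt ~= R0 * s     (s = susceptible fraction)
# The environment enters mainly through c; immunity through s.

def r0(p_transmit: float, contacts_per_day: float, infectious_days: float) -> float:
    return p_transmit * contacts_per_day * infectious_days

def rt(r0_value: float, susceptible_fraction: float) -> float:
    return r0_value * susceptible_fraction

# Same pathogen, two environments differing only in contact rate:
hermits = r0(p_transmit=0.1, contacts_per_day=0.01, infectious_days=8)
rave = r0(p_transmit=0.1, contacts_per_day=150, infectious_days=8)
print(f"hermits R0 = {hermits:.3f}, rave R0 = {rave:.1f}")

# Folding in immunity gives the combined environment-plus-immunity number:
print(f"rave Rt at 60% immune = {rt(rave, susceptible_fraction=0.4):.1f}")
```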
Dan, both you and Elizabeth make good points here that I hadn’t given enough consideration to (I wish I could tag both of you in a comment somehow, but I’m not sure if that’s possible).
Yes, it is dependent on the population/community, but there are also several different ways to calculate it, making it hard to compare not just between diseases but also between R0 calculations for a given disease… So… yeah, that makes a straightforward objective ranking of contagiousness a much more difficult task than I suspected from the table in the article… it also makes talking about contagiousness objectively somewhat harder than I hoped.
R0 is generally a pretty good “this is how contagious this virus is” metric, and it is dependent on the environment. If two diseases have different R0s in the same environment, the one with the higher R0 is more contagious (modulo something really weird). But environmental changes can affect R0 just as much as they affect Rt.