We have interventions such as Taffix or prophylactic ivermectin that show large effects in studies, but where we don’t have enough good studies to be confident that the effects generalize. The meta-review on ivermectin reports ‘Low-certainty evidence found that ivermectin prophylaxis reduced COVID-19 infection by an average 86% (95% confidence interval 79%–91%)’; the Taffix study reports ‘Odds ratio for infection among Taffix users was 0.22, a reduction of 78% (95%CI 1%–95%).’
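As a toy illustration of where the quoted percentages come from, here is the arithmetic linking an odds ratio to the “percent reduction” figures (the specific OR interval of roughly 0.05–0.99 for Taffix is inferred from the quoted 1%–95% reduction, not stated in the study quote):

```python
# Illustrative sketch: converting an odds ratio (OR) into the
# "percent reduction" figures quoted above.
# Note: reduction = 1 - OR only approximates risk reduction
# (the approximation is good when the outcome is rare).

def or_to_reduction(odds_ratio):
    """Percent reduction implied by an odds ratio."""
    return round((1 - odds_ratio) * 100)

# Taffix point estimate: OR 0.22 -> 78% reduction
print(or_to_reduction(0.22))

# The wide 95% CI (1%-95% reduction) corresponds to an OR interval
# of roughly 0.05-0.99: a large but very uncertain effect.
print(or_to_reduction(0.99), or_to_reduction(0.05))
```

The width of that interval is the whole point of the paragraph above: the point estimate is large, but the confidence interval is compatible with almost no effect at all.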
If we did all the things for which studies suggest large effects, or at least ran additional studies to verify that those effects exist, then saying “we know masks work” would be a lot more responsible than it is in a world where we don’t do those things.
> - there are many moderately strong factors which have an impact of around 20% (masks, weaker NPIs, increased humidity, ...)
If you buy that model, there’s still the question of why we force children in schools to wear masks but don’t force the schools to control their humidity levels. Humidity control also has a claimed effectiveness of more than 20%. If I, for example, Google “us school humidity”, I find no hits about COVID-19, but I do find websites claiming “Temporary humidity control is vital to keeping illnesses at bay in schools. Students in poorly ventilated rooms suffer 70% more respiratory illnesses.”
A More Dakka approach would be to do all of these things, but that’s not what we are doing.
On a personal level it makes sense to take your Taffix and ivermectin, wear a properly fitted respirator, control the air quality in your flat, and get vaccinated, but the overall debate is still important.
> Uncovering even the moderately sized effects requires modelling, and failure to see them clearly, e.g. in regional comparisons, should not make anyone update, apart from rejecting obviously implausibly large effect sizes (e.g. masks reducing transmission by 60%).
Failure to see an effect doesn’t mean that the effect is disproven, but it does mean that we don’t know whether the effect exists.
Why we don’t have more studies on Taffix, or on increasing humidity in schools, and why these don’t get much attention, seems like an Inadequate Equilibria type of problem, which is quite distinct from evaluating the existing evidence on some topic.
> Failure to see an effect doesn’t mean that the effect is disproven, but it does mean that we don’t know whether the effect exists.
Sorry for being too brief before.
No. Per Bayes’ theorem, failure to see an effect in an analysis or experiment where you would expect to see no effect regardless of whether the effect exists should leave you with your prior.
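The Bayes-theorem point above can be made concrete with a minimal sketch (the specific probabilities are illustrative assumptions, not taken from the thread): if an analysis is about equally likely to show “no visible effect” whether or not the effect is real, observing “no visible effect” leaves the prior unchanged.

```python
# Minimal Bayes' theorem sketch: an observation that is equally likely
# under both hypotheses carries no information (likelihood ratio = 1).

def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Posterior P(H | data) via Bayes' theorem."""
    numerator = p_data_given_h * prior
    return numerator / (numerator + p_data_given_not_h * (1 - prior))

prior = 0.9  # e.g. fairly confident the effect exists

# An underpowered regional comparison: looks null either way,
# so the posterior stays at ~0.9 (no update).
print(posterior(prior, 0.8, 0.8))

# A well-powered study that would very likely have detected a real
# effect: now a null result produces a genuine downward update.
print(posterior(prior, 0.2, 0.9))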
In the specific case of this topic and post: someone looking at masks clearly will likely have the prior “they work, but they aren’t a miracle cure”. More precisely, this could be expressed roughly as an expected effect distribution in R-reduction space, with almost all of the mass centered somewhere between 5% and 50%. Different reasonable observers looking at different data will likely arrive at somewhat different maximum likelihood estimates and shapes of the distribution, but they will have very little probability mass on no effect or harm, and very little on a large effect.
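One way to make that prior concrete is a discrete distribution over “R reduction from masks”. The particular shape below (a triangular bump peaking at 20%) is an illustrative assumption of mine, not something stated in the thread; the only property that matters is that almost all mass sits between 5% and 50%, with very little on no effect or a huge effect.

```python
# Sketch of a prior over "R reduction from masks" with almost all
# probability mass between 5% and 50%. The triangular shape peaking
# at 20% is an illustrative assumption.

reductions = [i / 100 for i in range(0, 101, 5)]  # 0%, 5%, ..., 100%

def unnormalised_weight(r):
    # Triangular bump peaking at 20%, falling to zero by 50%.
    if 0.05 <= r <= 0.50:
        return max(0.0, 1 - abs(r - 0.20) / 0.30)
    return 0.01  # tiny residual mass on "no effect" / "huge effect"

weights = [unnormalised_weight(r) for r in reductions]
total = sum(weights)
prior = {r: w / total for r, w in zip(reductions, weights)}

# Almost all mass (~98% here) lies in the 5%-50% band.
mass_5_to_50 = sum(p for r, p in prior.items() if 0.05 <= r <= 0.50)
print(f"mass between 5% and 50%: {mass_5_to_50:.2f}")
```

Two observers could disagree about the peak or the spread of this bump and still agree that a null-looking regional comparison shouldn’t move it much.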
Should someone with a prior from this class update it, based on the evidence consisting of the analysis by Mike Harris?
Not at all! The posterior should stay the same.
Should someone with this prior update the prior, based on reading the referenced paper?
In my view, yes: I think Bayesians should shift even more mass away from very low effects (like 5%) and from very high effects (like 50%).
> Why we don’t have more studies on Taffix, or on increasing humidity in schools, and why these don’t get much attention, seems like an Inadequate Equilibria type of problem, which is quite distinct from evaluating the existing evidence on some topic.
The point is that the evidence we have for Taffix, and the evidence for humidity, is better than the evidence we have for masks.
> No. Per Bayes’ theorem, failure to see an effect in an analysis or experiment where you would expect to see no effect regardless of whether the effect exists should leave you with your prior.
The prior before we run studies is that we don’t know whether or not masks work. Studies are the only way to move from “We think it’s likely that masks work” to “We know that masks work”.