Let’s say X% get hospitalized within 2 weeks. What’s the highest value of X that would suggest variolation is a good idea? Keep in mind that:
The demographics of your sample aren’t the same as the general population’s; hopefully you didn’t include many 60+ folks.
You don’t know how many participants botched the protocol, and a botch could go in any direction (dose too high, too low, or no dose at all).
You don’t know the hospitalization rate after contracting corona in the normal ways, which can also involve a low dose. Many people aren’t getting tested right now, and the epidemic is still spreading.
Etc.
> Let’s say X% get hospitalized within 2 weeks. What’s the highest value of X that would suggest variolation is a good idea?
Roughly X ≤ 1%. Something like 1/10 to 1/30 of the average 2-week hospitalization rate for a comparable set of non-study people is the success case. Assuming the study enrolls 10k+ participants, it’s not that hard to get a strong signal of success out of the data.
What’s special about this situation, besides the desperate emergency, is that the effect size we’re hoping to detect here is nothing short of huge.
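To make the sample-size claim concrete, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not a calculation from the thread: the 10k sample size comes from the comment above, the observed rates are hypothetical, and the Wilson score interval is simply my choice of a standard binomial bound.

```python
# Back-of-the-envelope sketch (illustrative only): at an assumed sample of
# 10k participants, how tightly can we bound the hospitalization rate?
import math

def wilson_upper(k, n, z=1.96):
    """Upper end of the 95% Wilson score interval for a binomial proportion."""
    p = k / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre + margin) / (1 + z**2 / n)

n = 10_000                           # assumed study size from the comment above
for observed_rate in (0.005, 0.01):  # hypothetical observed 2-week rates
    k = round(observed_rate * n)
    print(f"{k}/{n} hospitalized -> 95% upper bound ≈ {wilson_upper(k, n):.2%}")

# With 50/10,000 the upper bound is ~0.7%, and with 100/10,000 it is ~1.2%,
# i.e. below even the low end of a 2-20% baseline range, so a 10x-30x
# reduction would not be drowned out by sampling noise alone.
```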
> You don’t know how many botched the protocol.
If video documentation of the full protocol were required to count someone in the study, protocol accuracy could probably get within a factor of 2 of having a professional administer it in meatspace.
> You don’t know the hospitalization rate after contracting corona in the normal ways, which can also involve a low dose. Many people aren’t getting tested right now, and the epidemic is still spreading.
Aren’t we confident that the hospitalization rate from getting it in the normal ways is 2-20%, and isn’t that enough to go on?
Yes, if the potential effect size is large, you can get away with imprecise answers to some questions. But if there are many questions, at some point your “imprecision budget” will be spent. For example, will you be able to detect whether your dosing leads to later hospitalization instead of no hospitalization? Or whether it weakens immunity instead of strengthening it?
I’m pretty optimistic that we have enough imprecision budget to work with if we put our heads together. Unfortunately, this comment section hasn’t been very lively so far.
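As a rough illustration of that “imprecision budget” idea, here is a sketch that stacks worst-case correction factors onto an assumed observed rate and checks whether the apparent benefit survives against the low end of the 2-20% baseline range. The framing and every multiplier are hypothetical, chosen only to show how the budget gets spent; they are not estimates from the thread.

```python
# "Imprecision budget" sketch (hypothetical multipliers, not real estimates):
# stack pessimistic corrections onto an assumed observed rate and see
# whether the result still beats the low end of the baseline range.
observed_rate = 0.01      # hypothetical observed 2-week hospitalization rate
baseline_low = 0.02       # low end of the assumed 2-20% baseline range

# One worst-case multiplier per open question (all invented for illustration):
corrections = {
    "sample younger/healthier than the comparison population": 2.0,
    "botched doses diluting the apparent effect": 1.5,
    "hospitalizations pushed past the 2-week window": 1.5,
}

adjusted = observed_rate
for reason, factor in corrections.items():
    adjusted *= factor
    print(f"after '{reason}' (x{factor}): {adjusted:.1%}")

print("still below the 2% low-end baseline?", adjusted < baseline_low)
# 1% x 2.0 x 1.5 x 1.5 = 4.5%: three pessimistic corrections already overrun
# the 2% low end, though there is still room against a 10-20% baseline.
```

Whether that counts as “enough budget” depends mainly on which end of the 2-20% baseline range you trust.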