One thing I’d be interested in (though I acknowledge it’s a bit tricky to navigate) is somehow better leveraging the wisdom of crowds here. I like that the tool as-is is clean and simple, and I like that you provide the raw spreadsheet so people can tweak the variables to match their own epistemics.
It’d be nice if I could see how much disagreement there was on the risk estimates for individual components, and ideally what people’s reasoning was.
There’s a lot of trickiness here: if you just let anyone submit disagreeing estimates, you open yourself up to moderating arguments about whether so-and-so is a crackpot, and that sounds like a huge pain. I’m not sure there’s a way to sidestep it.
But my ideal version of this would let me see the different estimates with their associated reasoning, and then make my own judgment call about whether to go with microcovid.org’s default estimate, the wisdom of the whole crowd, or a subset of the crowd if I trust some people’s judgment more than others.
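To make the judgment call concrete, here is a minimal sketch of the three options (all names and numbers are hypothetical, and a real aggregation would need to handle outliers and weighting more carefully):

```python
# Sketch: choosing between the site default, the full crowd's median,
# and the median of a trusted subset of the crowd. All data hypothetical.
from statistics import median

default_estimate = 10_000  # e.g. microcovid.org's default for some activity

# Hypothetical crowd submissions: submitter -> estimate in microCOVIDs
crowd = {
    "alice": 8_500,
    "bob": 12_000,
    "carol": 9_000,
    "dave": 30_000,  # possible outlier
}
trusted = {"alice", "carol"}  # people whose judgment I trust more

crowd_estimate = median(crowd.values())
trusted_estimate = median(v for k, v in crowd.items() if k in trusted)
```

With these made-up numbers, `crowd_estimate` is 10,500 and `trusted_estimate` is 8,750; the point is just that each aggregation rule gives a different answer, and the reader picks which one to trust.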
To me, microCOVID’s defaults seem close enough to the truth that the ideal version you describe wouldn’t provide much marginal value.
Especially since, at least for me, the value is mostly in knowing which activities I will or won’t do rather than in nailing down the precise number of microCOVIDs. E.g., knowing that eating inside at a restaurant is 8,500 microCOVIDs instead of 10,000 wouldn’t be enough to get me to do it, so it doesn’t really matter to me whether the real number is 8,500 or 10,000. That said, given the wide confidence intervals, maybe this point doesn’t carry much weight.
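The threshold logic above can be sketched in a few lines. The 200-microCOVIDs-per-week budget is microCOVID’s suggested figure for roughly a 1% annual risk; the activity costs are the hypothetical numbers from the example:

```python
# Sketch: a decision only changes if an estimate crosses the budget
# threshold. Budget figure from microCOVID's ~1%-per-year suggestion;
# activity costs are the hypothetical numbers above.
WEEKLY_BUDGET = 200  # microCOVIDs per week

def would_do(activity_cost: int, budget: int = WEEKLY_BUDGET) -> bool:
    """Crude rule: do the activity only if it fits in the weekly budget."""
    return activity_cost <= budget

# 8,500 and 10,000 are both far over budget, so the decision is the
# same either way -- the precision of the estimate doesn't matter here.
same_decision = would_do(8_500) == would_do(10_000)
```

Of course a real decision weighs more than a single threshold, but this is the sense in which 8,500 vs. 10,000 is a distinction without a difference.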
> There’s a lot of trickiness in “if you just let anyone submit disagreeing statements, you’re opening yourself up to managing arguments about whether so-and-so is a crackpot or whatever” and that sounds like a huge pain, I’m not sure if there’s a way to sidestep that.
I don’t think it’d really be possible to sidestep it 100%, but if you only accepted statements from, e.g., people with PhDs, maybe that’d be good enough: the benefit of the extra inputs might outweigh the fact that the sources aren’t fully vetted.
Thanks. This is great.