I’d like a better model of possible infection and death trajectories than the models driving discussion at present. I think we might be capable of that.
Metaculus only represents the outputs of people’s models. There’s been a lot of talk in the rationalsphere criticizing overly simplistic/rigid/overconfident models, but little explicit discussion of what a better one might look like. It would be great if such models could be built in Guesstimate, but they’d still be valuable in other formats as long as they’re easily readable.
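To make that concrete, here’s a minimal Guesstimate-style sketch written as a Monte Carlo simulation in Python. Every number and distribution in it (current infections, doubling time, IFR) is an invented placeholder rather than an estimate; the point is only the shape: sample uncertain inputs, push them through a simple growth model, and read off the quantiles.

```python
# A minimal, Guesstimate-style sketch (placeholder numbers, not a forecast):
# sample uncertain inputs as distributions, push them through a naive
# exponential-growth model, and report the resulting quantiles.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000
horizon_days = 30
current_infections = 50_000  # invented "true current infections" figure

# Uncertain inputs as distributions rather than point estimates
doubling_time = rng.lognormal(mean=np.log(6.0), sigma=0.4, size=n_samples)  # days
ifr = rng.beta(2, 198, size=n_samples)                                      # ~1% mean

projected_infections = current_infections * 2 ** (horizon_days / doubling_time)
projected_deaths = projected_infections * ifr

for q in (5, 25, 50, 75, 95):
    print(f"{q}th percentile of deaths at day {horizon_days}: "
          f"{np.percentile(projected_deaths, q):,.0f}")
```

The value of a format like this is that anyone can see exactly which inputs are doing the work and swap in their own distributions.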
Note that Metaculus also estimates things that are likely inputs to such models, e.g. “the” IFR (infection fatality rate).
Epidemicforecasting.org is doing some of this, though sadly the model behind the project isn’t very transparent (it’s based on commercial epidemiological modeling software).
It would take extraordinary evidence to convince me that LW can do better at applied cryptography than the current standard of cryptography. That’s because it would take extraordinary evidence to convince me that LW can do better at any well-developed field than the current standard.
Therefore, I would need extraordinary evidence to convince me that LW can do better at epidemiology than the current standard of epidemiology.
Why should I believe that LW can recognize, and promote to attention, people who are better epidemiologists than the current experts, with at least 50% specificity?
Nobody’s forcing you to help with this! And if you just want to point out why particular proposed models are bad, that’s a good way to help as well.
Academics are indeed very smart, but under time pressure they have many additional constraints, most particularly the need to have everything pass peer review (now or later), which entails some unfortunate requirements like:
- “tighter confidence intervals look better”
- “a single model is more justifiable than an ensemble”
- “you can justify a handpicked parameter more easily than a handpicked distribution over that parameter”
- “if your model looks at all like someone else’s you’d better cite them, so either keep things in spherical cow territory or do a long literature search while people are dying”
We’re not constrained by the same factors, and so it’s perhaps possible to do better.
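As one example of something peer review makes awkward but we can do freely: average over a handful of crude, deliberately different models instead of defending a single functional form. A toy sketch (the case counts, ceiling, and weights below are all invented):

```python
# Toy ensemble over deliberately different growth models (all data and
# weights below are invented placeholders, not estimates).
import numpy as np

observed = np.array([120, 150, 210, 260, 340, 430, 520])  # fake daily case counts
t = np.arange(len(observed))
t_future = t[-1] + 14  # project 14 days past the last observation

def exponential_model(day):
    slope, intercept = np.polyfit(t, np.log(observed), 1)  # log-linear fit
    return np.exp(intercept + slope * day)

def linear_model(day):
    slope, intercept = np.polyfit(t, observed, 1)  # straight-line fit
    return intercept + slope * day

def saturating_model(day, ceiling=2_000.0):
    # crude stand-in for interventions: exponential approach to a hand-picked ceiling
    rate = np.polyfit(t, np.log(ceiling - observed), 1)[0]
    return ceiling - (ceiling - observed[-1]) * np.exp(rate * (day - t[-1]))

models = {"exponential": exponential_model,
          "linear": linear_model,
          "saturating": saturating_model}
weights = np.array([0.4, 0.2, 0.4])  # hand-picked prior over the models

projections = np.array([m(t_future) for m in models.values()])
print({name: round(p) for name, p in zip(models, projections)})
print("weighted ensemble:", round(float(weights @ projections)))
```

None of the individual components would survive review on its own, which is the point: each piece is easy to argue with and easy to swap out.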
I have no doubt that LW is more than capable of making models beyond *my* ability to find fault with.
And I am actually confused that “Our models won’t pass peer review” is being used as evidence of higher quality.
Is there a betting market where I can take the house position against modelers who think they can outperform some publicly available professional epidemiologist’s model?