clearer epistemic status tags for the different claims....
I find it very hard, possibly impossible, to do the things you ask in this bullet point and the synthesis in the same post. If I were going to do that, it would be on a per-paper basis: for each paper, list the claims and how well supported they are.
Generally, what research do you wish had existed that would have better informed you here?
This seems interesting and fun for me to write. It might also be worth going over my favorite studies.
I find it very hard, possibly impossible, to do the things you ask in this bullet point and the synthesis in the same post
Hard because of limitations of the written word / UX, or intellectual difficulties with processing that class of information in the same pass that you process the synthesis-type information?
(Re: UX – I think it’d work best if we had a functioning side-note system. In the meantime, something that I think would work is to give each claim a rough classification of “high, medium, or low credence”, including a link to a footnote that explains some of the details.)
Data points from papers can contribute either directly to predictions (e.g. we measured it, and gains from colocation drop off at 30m) or to forming a model that makes predictions (e.g. the diagram). Credence levels for the first kind feel fine, but like a category error for model-born predictions. It’s not quite true that the model succeeds or fails as a unit, because some models are useful in some arenas and not in others, but the thing to evaluate is definitely the model, not the individual predictions.
I can see talking about what data would make me change my model and how that would change predictions, which may be isomorphic to what you’re suggesting.
The UI would also be a pain.