Orienting a bit more around the “the state of management research is shitty” issue
Can you say more about this? That seems like a very valuable but completely different post, which I imagine would take an order of magnitude more effort than investigation into a single area.
Yeah, there’s definitely a version of this that is just a completely different post. I think Habryka had his own opinions here that might be worth sharing.
Some off-the-cuff thoughts:
Within scope for something “close to the original post”, I think it’d be useful to have:
clearer epistemic status tags for the different claims.
Which claims are based on out-of-date research? How old is the research?
Which are based on shoddy research?
What’s your credence for each claim?
More generally, how much stock should a startup founder place in this post? In your opinion, does the state of this research rise to the level of “you should most likely follow this post’s advice”, or is it more like “eh, read this post to get a sense of what considerations might be at play, but mostly rely on your own thinking”?
Broader scope, maybe its own entire post (although I think there’s room for both a “couple of paragraphs” version and an “entire long-term research project” version)
Generally, what research do you wish had existed, that would have better informed you here?
Are there particular experiments or case studies that seemed (relatively) easy to replicate, and just needed to be run again in the modern era with 21st-century communication tech?
clearer epistemic status tags for the different claims....
I find it very hard, possibly impossible, to do the things you ask in this bullet point and synthesis in the same post. If I were going to do that, it would be on a per-paper basis: for each paper, list the claims and how well supported they are.
Generally, what research do you wish had existed, that would have better informed you here?
This seems interesting and fun for me to write. It might also be worth going over my favorite studies.
I find it very hard, possibly impossible, to do the things you ask in this bullet point and synthesis in the same post
Hard because of limitations of the written word / UX, or because of intellectual difficulties with processing that class of information in the same pass that you process the synthesis-type information?
(Re: UX – I think it’d work best if we had a functioning side-note system. In the meantime, something that I think would work is to give each claim a rough classification of “high, medium, or low credence”, including a link to a footnote that explains some of the details)
Data points from papers can either contribute directly to predictions (e.g. we measured it, and gains from colocation drop off at 30m), or to forming a model that makes predictions (e.g. the diagram). Credence levels for the first kind feel fine, but like a category error for model-born predictions. It’s not quite true that the model succeeds or fails as a unit, because some models are useful in some arenas and not in others, but the thing to evaluate is definitely the model, not the individual predictions.
I can see talking about what data would make me change my model and how that would change predictions, which may be isomorphic to what you’re suggesting.
The UI would also be a pain.