When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.
I would love to see John, or anyone with an interest in the subject, do an explainer on all the ways science organizes and coordinates to solve problems.
In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.
>When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.
Though if you take analyses like Braden’s seriously, quite possibly these filtering efforts have negative value, in that they are more likely to favor projects supported by insiders and senior people, who have historically been bad at predicting where the next good things will come from. “Science advances one funeral at a time,” in a way that seems detectable from analyzing the literature.
This isn’t to say that planning is worthless, or that no one can see the future. It’s to say that you can’t buy the ability to buy the right things; you have to develop that sort of judgment on your own, and all the hard evidence comes too late to be useful.
I’m starting to read Braden. The thing is, if Braden’s analysis is true, then either:
1. We can filter for the right people; we’re just doing it wrong. We need to empower a few senior scientists who no longer have a dog in the fight to select who they think should be endowed with money for unconstrained research. Money can buy knowledge if you do it right.
2. We truly can’t filter for the right ideas. Either rich people need to do research, researchers need to get rich, or we need to just randomly dump money on researchers and hope that a few of them turn out to be the next Einstein.
I think there’s a fairly rigorous, step-by-step, logical way to ground this whole argument we’re having, but it’s suffering from a lack of precision somehow...
The people who fund science seem to lack knowledge about how to structure that funding effectively.
There are some experts who think that they have an alternative proposal that leads to a much better return on investment. Those experts have some arguments for their position, but it’s not straightforward to know which expert is right, and that judgment can’t be bought.
I suspect being good at finding better scientists is very close to having a complete theory of scientific advancement and being able to automate the research itself.
The extreme form of that idea is: “If we could evaluate the quality of scientists, then we could fully computerize research. Since we cannot fully computerize research, we therefore have no ability to evaluate the quality of scientists.”
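Spelled out, that’s a modus tollens (my formalization, not the commenter’s):

$$E \to A, \quad \neg A \;\vdash\; \neg E$$

where $E$ = “we can evaluate the quality of scientists” and $A$ = “we can fully computerize research”. The inference is valid; the load-bearing assumption is the conditional premise $E \to A$.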
The most valuable thing to do would be to observe what’s going on right now, and to survey the possibilities we haven’t tried (or have abandoned). Insofar as we have credence in the “we know nothing” hypothesis, we should blindly dump money on random scientists. Our credence should never be zero, so this implies that some nonzero amount of random money-dumping is optimal.
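To make that last step concrete: the argument implicitly assumes diminishing returns within each funding channel (with linear returns, the optimum would be all-or-nothing). Here’s a toy model of that assumption, in which all the payoff multipliers are invented for illustration; only the qualitative behavior matters:

```python
import numpy as np

def best_random_fraction(p_know_nothing: float) -> float:
    """Budget share for random grants that maximizes expected value,
    given credence p that expert filtering is no better than chance.
    Assumes sqrt (diminishing) returns within each channel; the 3x and
    0.5x multipliers are made up for illustration."""
    fracs = np.linspace(0.0, 1.0, 1001)

    def ev(f: float) -> float:
        expert, random = 1.0 - f, f
        ev_experts_work = 3.0 * np.sqrt(expert) + np.sqrt(random)   # filtering picks winners
        ev_know_nothing = 0.5 * np.sqrt(expert) + np.sqrt(random)   # filtering adds negative value
        return (1 - p_know_nothing) * ev_experts_work + p_know_nothing * ev_know_nothing

    return fracs[np.argmax([ev(f) for f in fracs])]

for p in (0.0, 0.2, 0.5):
    print(f"credence in 'we know nothing' = {p:.1f} -> "
          f"random tranche = {best_random_fraction(p):.2f}")
```

With these made-up numbers the optimal random tranche grows from 0.10 at p = 0 to roughly 0.25 at p = 0.5; the exact values are meaningless, but the monotone relationship between credence and random allocation is the point.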
I think this is true if you’re looking for near-perfect scientists, but if you’re assessing current science to decide whom to invest in, there are lots of things you can do to get better at performing such assessments (e.g. here).
>In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.
How about a Metaculus-style prediction market for scientific advances, conditional on an investment in a given person or project, where people put stake into the success of that person or project? Is this susceptible to bad incentives?
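For concreteness, here’s a minimal sketch of one standard mechanism such a market could run on, Hanson’s logarithmic market scoring rule (LMSR); the class, parameters, and numbers are my illustration, not an existing Metaculus feature:

```python
import math

class LMSRMarket:
    """Automated market maker for a binary claim, e.g. 'person/project X,
    if funded, produces result Y by some date'. Sketch only; the liquidity
    parameter b bounds the sponsor's worst-case loss at b * ln(2)."""

    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity
        self.q = {"yes": 0.0, "no": 0.0}  # outstanding shares per outcome

    def _cost(self) -> float:
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in self.q.values()))

    def price(self, outcome: str) -> float:
        # The instantaneous price doubles as the market's implied probability.
        total = sum(math.exp(qi / self.b) for qi in self.q.values())
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome: str, shares: float) -> float:
        # A trade costs the difference in the cost function: C(q') - C(q).
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

market = LMSRMarket()
paid = market.buy("yes", 20.0)  # a forecaster stakes money on the project succeeding
print(f"paid {paid:.2f} for 20 YES shares; implied P(success) = {market.price('yes'):.2f}")
```

The conditional part (“given an investment”) is usually handled by voiding all trades if the funding condition isn’t met; note that anyone holding NO shares is the one with a sabotage incentive.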
I think the greater concern is that success is hard to measure. And yes, you could imagine that holding shares betting against, say, the efficacy of a vaccine being above a certain level could be read as an incentive to sabotage the effort to develop it.