This runs into the same problem: unless you’re already an expert, you can’t distinguish actually-useful research from the piles of completely useless research (much of it by relatively high-status researchers).
An example close to LW’s original goals: imagine an EA five years ago, wanting to donate to research on safe/friendly AI. They hear somebody argue about how important it is for AI research to be open-source so that the benefits of AI can be reaped by everyone. They’re convinced, and donate to a group trying to create widely-available versions of cutting-edge algorithms. From an X-risk standpoint, they’ve probably done close-to-nothing at best, and there’s an argument to be made that their impact was net harmful.
One needs to already have some amount of expertise in order to distinguish useful research to fund.
Can you come up with an example that isn’t AI? Most fields aren’t rife with infohazards, and a 20% chance of funding the best research just divides your expected impact by a factor of 5, which could still be good enough if you’ve got millions.
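The arithmetic behind that claim can be made explicit. A minimal sketch, with purely illustrative numbers, assuming the best research delivers all the impact and everything else delivers roughly none:

```python
# Hedged sketch of the impact-dilution arithmetic (all numbers hypothetical).
# Assume the "best" research yields 1 impact-unit per dollar, everything
# else yields ~0, and you identify the best research only 20% of the time.
budget = 5_000_000            # dollars (hypothetical)
p_best = 0.20                 # chance any given grant hits the best research
impact_per_dollar_best = 1.0  # impact-units per dollar for the best research

expected_impact = budget * p_best * impact_per_dollar_best
print(expected_impact)  # budget / 5 in expected impact-units
```

So uncertainty about which research is best acts as a linear discount on expected impact, which a large enough budget can absorb.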
For what it’s worth: if you’ve got enough to fund multiple AI researchers and your goal is purely to fix AI, I concede your point.
How about cancer research? This page lists success rates of clinical trials in different subfields; oncology trials succeed around 4% of the time. I would also guess that a large chunk of the “successes” in fact do basically nothing, and made it through largely by being the one-in-twenty which hit 95% significance by chance, or managed to p-hack, or the like. From an inside view, most cancer research I’ve seen indeed looks pretty unhelpful, based on my understanding of biology and of how-science-works in general (and this goes double for any cancer research “using machine learning”, which is a hot subfield).
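The base-rate reasoning behind “a large chunk of the successes do basically nothing” can be sketched with textbook Bayes. All numbers below are hypothetical, chosen only to show the shape of the argument: when few trialed treatments truly work, flukes at the p < 0.05 threshold can dominate the pool of declared successes.

```python
# Hedged illustration of the base-rate point (all numbers hypothetical).
true_rate = 0.01  # fraction of trials testing a genuinely effective treatment
power = 0.8       # chance a real effect is detected
alpha = 0.05      # chance a null effect "succeeds" by luck

# Overall success rate mixes true detections and false positives.
p_success = true_rate * power + (1 - true_rate) * alpha

# Share of declared successes that are actually flukes.
false_positive_share = (1 - true_rate) * alpha / p_success
print(round(false_positive_share, 2))  # ~0.86: most "successes" are flukes
```

Under these assumptions roughly six in seven “successes” are false positives, consistent with the inside-view guess above.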
More generally: we live in a high-dimensional world. Figuring out “which direction to search in” is usually a much more taut constraint than having the resources to search. Brute-force searching a high-dimensional space requires resources exponential in the dimension of the space.
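The exponential claim is easy to make concrete: even a coarse grid over a high-dimensional search space blows up past any conceivable resource budget. A minimal sketch:

```python
# Why brute-force search is exponential in dimension: sampling just 10
# candidate values along each axis of a d-dimensional space requires
# 10**d evaluations.
points_per_axis = 10
for d in (1, 3, 10, 100):
    print(f"d={d}: {points_per_axis ** d} evaluations")
# At d=100 the count (10**100) exceeds the ~10**80 atoms in the observable
# universe -- no amount of funding brute-forces that; you need a direction.
```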
Combine that with misaligned incentives for researchers, and our default expectation should usually be that finding the right researchers to fund is more of a constraint than resources.