Can you name one kind of research that wouldn’t have counterfactually happened if alignment was publicly funded?
Wrong question.
The parable of the leprechaun is relevant here:
One day, a farmer managed to catch a leprechaun. As is usual for these tales, the leprechaun offered to show the farmer where the leprechaun’s gold was buried, in exchange for the leprechaun’s freedom. The farmer agreed. So the leprechaun led the farmer deep into the woods, eventually stopped at a tree, and said “my gold is buried under this tree”.
Unfortunately, the farmer had not thought to bring a shovel. So, the farmer tied a ribbon around the tree to mark the spot, and the leprechaun agreed not to remove it. Then the farmer returned home to fetch a shovel.
When the farmer returned, he found a ribbon tied around every tree in the forest. He never did find the gold.
This is the problem which plagues many academic fields (psychology before the replication crisis is a now-clear example). It's not mainly that good research goes unfunded; it's that there's so much crap that the good work is (a) hard to find, and (b) not differentially memetically successful.
A little crap research mostly doesn’t matter, so long as the competent researchers can still do their thing. But if the volume of crap reaches the point where competent researchers have trouble finding each other, or new people are mostly onboarded into crap research, or external decision-makers can’t defer to a random “expert” in the field without usually getting a bunch of crap, then all that crap research has important negative effects.
It’s a cute story John but do you have more than an anecdotal leprechaun?
I think the simplest model (so the one we should default to, by Occam's mighty Razor) is that whether good research gets done in a field is mostly tied to:
intrinsic features of research in the area (i.e. how much feedback from reality, noisy vs. non-noisy, political implications, and lots more I don't care to name)
initial field-building, which drives who self-selects into the research field
the number of secure, funded research positions
The first is independent of funding source, and I don't think we have much evidence that the second would be much worse under public funding than under private funding. In the absence of strong evidence, I humbly suggest we default to the simplest model, in which:
more money & more secure positions → more people working on the problem
The fact that France has a significantly larger number of effectively-tenured positions per capita than most other nations, entirely publicly funded, is almost surely one of the most important factors in its (continued) dominance in pure mathematics, as evidenced by its large share of Fields Medals (13/66, versus 15/66 for the US). I observe in passing that your own research program is far more akin to academic math than to cancer research.
As for the position that you'd rather have no funding than public funding … well, let us be polite and call it … American.
(Probably not going to respond further here, but I wanted to note that this comment really hit the perfect amount of sarcasm and combativity for me personally; I enjoyed it.)
But if the volume of crap reaches the point where competent researchers have trouble finding each other, or new people are mostly onboarded into crap research, or external decision-makers can’t defer to a random “expert” in the field without usually getting a bunch of crap, then all that crap research has important negative effects
I agree with your observation. The problem is that many people are easily influenced by others, rather than critically evaluating whether the project they’re participating in is legitimate or if their work is safe to publish. It seems that most have lost the ability to listen to their own judgment and assess what is rational to do.
Alignment is almost exactly the opposite of abstract math? Math has the good quality of being checkable—you can take a paper, follow all its content, and become sure the content is valid. An alignment research paper can have valid math but still be inadequate on questions such as "is this math even related to reality?", which are much harder to check.
That may be so.
Wentworth's own work is closest to academic math/theoretical physics, perhaps to philosophy.
Are you claiming we have no way of telling good (alignment) research from bad? And if we do, why would private funding be better at figuring this out than public funding?
To be somewhat more fair: there are probably thousands of problems with the property that they are much easier to check than to solve. Alignment research is maybe not one of them, but I do think there's a general gap between verifying a solution and actually solving the problem.
The canonical examples are NP problems.
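To make the verify-versus-solve gap concrete, here is a toy sketch (my illustration, not from the thread) using subset-sum, a canonical NP-complete problem: checking a claimed certificate takes one pass over it, while the naive solver enumerates exponentially many subsets.

```python
from itertools import combinations

def verify(nums, target, certificate):
    """The easy direction: check a claimed solution with a handful of additions."""
    return all(x in nums for x in certificate) and sum(certificate) == target

def solve(nums, target):
    """The hard direction: brute-force search over all 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset sums to target

nums = [3, 9, 8, 4, 5, 7]
cert = solve(nums, 15)          # exponential-time search
print(verify(nums, 15, cert))   # linear-time check: True
```

The asymmetry is the whole point: `verify` stays cheap as `nums` grows, while `solve` blows up exponentially.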
Another interesting class consists of problems that are easy to generate but hard to verify.
John Wentworth told me the following delightfully simple example: generating a Turing machine program that halts is easy, but deciding whether an arbitrary TM program halts is undecidable.
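A runnable toy version of this asymmetry (my sketch, not John's): emitting a program that halts is trivial, but since no general halting decider can exist, the best a checker can do is simulate with a step budget and answer "halts" or "unknown" — never a definitive "loops".

```python
import sys

def generate_halting_program():
    """The easy direction: emit source code that certainly halts."""
    return "x = 1 + 1"  # straight-line code, trivially halting

def check_halts_within(src, max_steps=1_000_000):
    """The hard direction, approximated: returns 'halts' or 'unknown'.

    A true decider for arbitrary programs is impossible (halting problem),
    so we settle for bounded simulation via a trace-event budget.
    """
    steps = 0
    def tracer(frame, event, arg):
        nonlocal steps
        steps += 1
        if steps > max_steps:
            raise TimeoutError  # budget exhausted; can't conclude anything
        return tracer
    try:
        sys.settrace(tracer)
        exec(src, {})
        return "halts"
    except TimeoutError:
        return "unknown"
    finally:
        sys.settrace(None)

print(check_halts_within(generate_halting_program()))  # halts
print(check_halts_within("while True: pass"))          # unknown
```

Note that the checker is deliberately asymmetric: a "halts" answer is certain, while "unknown" only means the budget ran out — exactly the gap the undecidability result forces.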
Yep, I was thinking about NP problems, though #P problems for the counting version would count as well.