It’s a cute story, John, but do you have more than an anecdotal leprechaun?
I think the simplest model (and so the one we should default to by Occam’s mighty Razor) is that whether good research will be done in a field is mostly tied to:
1. intrinsic features of research in the area (i.e. how much feedback from reality, noisy vs. non-noisy results, political implications, and lots more I don’t care to name)
2. initial field-building driving who self-selects into the research field
3. the number of secure, funded research positions
The first is independent of funding source, and I don’t think we have much evidence that the second would be much worse for public funding as opposed to private funding.
In the absence of strong evidence, I humbly suggest we should default to the simplest model, in which:
more money & more secure positions → more people working on the problem
The fact that France has a significantly larger number of effectively-tenured positions per capita than most other nations, entirely publicly funded, is almost surely one of the most important factors in its (continued) dominance in pure mathematics, as evidenced by its large share of Fields Medals (13/66 versus 15/66 for the US). I observe in passing that your own research program is far more akin to academic math than to cancer research.
As for the position that you’d rather have no funding than public funding … well, let us be polite and call it … American.
(Probably not going to respond further here, but I wanted to note that this comment really hit the perfect amount of sarcasm and combativity for me personally; I enjoyed it.)
Alignment is almost exactly the opposite of abstract math? Math has the good quality of being checkable—you can take a paper, follow all its content, and become sure that its content is valid. An alignment research paper can have valid math but still be inadequate on questions such as “is this math even related to reality?”, which are much harder to check.
That may be so.
Wentworth’s own work is closest to academic math/theoretical physics, perhaps to philosophy.
Are you claiming we have no way of telling good (alignment) research from bad? And if we do, why would private funding be better at figuring this out than public funding?
To be somewhat more fair: there are probably thousands of problems with the property that they are much easier to check than to solve, and while alignment research is maybe not one of them, I do think there’s a general gap between verifying a solution and actually finding one.
The canonical examples are NP problems.
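To make the verify-vs-solve gap concrete, here is a minimal sketch (my own illustration, not from the thread) using subset sum, a standard NP-complete problem: checking a claimed solution takes linear time, while the obvious solver searches exponentially many subsets.

```python
from itertools import combinations

def verify_subset_sum(nums, target, candidate):
    # Verification is cheap: membership check plus one sum.
    return all(x in nums for x in candidate) and sum(candidate) == target

def solve_subset_sum(nums, target):
    # Brute-force search over all 2^n subsets, smallest first.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

sol = solve_subset_sum([3, 34, 4, 12, 5, 2], 9)
print(sol, verify_subset_sum([3, 34, 4, 12, 5, 2], 9, sol))  # [4, 5] True
```

Any solver's output can be checked by the fast verifier, but (assuming P ≠ NP) no fast solver exists.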
Another interesting class is problems that are easy to generate but hard to verify.
John Wentworth told me the following delightfully simple example: generating a Turing machine program that halts is easy; verifying that an arbitrary TM program halts is undecidable.
Yep, I was thinking about NP problems, though #P problems (the counting versions) would count as well.
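The generate/verify asymmetry above can be sketched in a few lines. This is a toy model of my own devising, not anything from the thread: a “program” is a step function plus a halt predicate on an integer state. Generating one that halts is trivial (build it to halt by construction), while the best a checker for arbitrary programs can do is run with a fuel limit and answer “halts” or “don’t know”; by the halting problem, no fuel bound turns this into a full decision procedure.

```python
def generate_halting_program():
    # A counter that stops at 10: guaranteed to halt by construction.
    return (lambda s: s + 1, lambda s: s >= 10)

def check_halts(step, done, start=0, fuel=1000):
    # Semi-decision procedure: True if the program halts within `fuel`
    # steps, None ("don't know") otherwise.
    s = start
    for _ in range(fuel):
        if done(s):
            return True
        s = step(s)
    return None

step, done = generate_halting_program()
print(check_halts(step, done))                    # True
print(check_halts(lambda s: s, lambda s: False))  # None (a looping program)
```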