The issues you mention don’t seem tied to public versus private funding; they seem more about the size of the funding pool plus an intrinsically difficult scientific question. I agree that at some point more funding doesn’t help. At the moment, that doesn’t seem to be the case in alignment. Indeed, by number of researchers, alignment is not even as large as a relatively small field like linguistics.
How well the funders understand the field, and can differentially target more-useful projects, is a key variable here. For public funding, the top-level decision maker is a politician; they will in the vast majority of cases have approximately-zero understanding themselves. They will either apportion funding on purely political grounds (e.g. pork-barrel spending), or defer to whoever the consensus “experts” are in the field (which is where the median researcher problem kicks in).
In alignment to date, the funders have generally been people who understand the problem themselves at least well enough to notice that it’s worth paying attention to (in a world where alignment concern wasn’t already mainstream), and can therefore differentially target useful work rather than blindly spray money around.
Seems overstated. Universities support all kinds of very specialized long-term research that politicians don’t understand.
From my own observations and from talking with funders themselves, most funding decisions in AI safety are made on mostly superficial markers; grantmakers on the whole don’t dive deep into technical details. [In fact, I would argue that blindly spraying money around in a more egalitarian way (i.e. what SeriMATS has accomplished) is probably not much worse than the status quo.]
Academia isn’t perfect, but on the whole it gives a lot of bright people the time, space and financial flexibility to pursue their own judgement. In fact, many alignment researchers have done a significant part of their work in an academic setting or while supported in some way by public funding.