Seems overstated. Universities support all kinds of very specialized long-term research that politicians don’t understand.
From my own observations, and from talking with funders themselves, most funding decisions in AI safety are based on mostly superficial markers—grantmakers on the whole don’t dive deep into technical details. [In fact, I would argue that blindly spraying money around in a more egalitarian way (i.e. what SeriMATS has accomplished) is probably not much worse than the status quo.]
Academia isn’t perfect, but on the whole it gives a lot of bright people the time, space, and financial flexibility to pursue their own judgement. In fact, many alignment researchers have done a significant part of their work in an academic setting or while supported in some way by public funding.