AI safety research seems to be constrained by the supply of capable researchers: very few of those able to make a contribution are aware of the problem, or find it interesting. The Berkeley–MIRI seminar has increased the pool of people aware of the problem, but the total number of AI safety researchers remains small.
MIRI's goal isn't to increase the speed of AI research in general but to increase FAI research specifically. Speeding up AI research as a whole would likely increase existential risk rather than decrease it.