Set a Yoda Timer and share the most important idea you haven’t had time to express. Five minutes is all you get.
I really think that a lot of modern AI alignment research is being done within the academic system, but precisely because it’s done within the academic system, it gets fairly little attention from the independent/dedicated nonprofit research community relative to that community’s own work. Conversely, it likely gets much more attention within academia.
I don’t think the dynamic here is “each team likes their own people best.” I think it’s due to a degree of skepticism of the academic system that may be warranted in non-emergencies, but is less warranted when facing truly apocalyptic threats. The academic system has produced a lot of valuable research on climate change and nuclear risks, and I’d expect its research on AI risk to be broadly similar.
The fact that the first few successful researchers weren’t academics isn’t really a point against the academic system here, any more than Priestley and Lavoisier not being academics is a point against academic chemists. Nor is the field’s supposed pre-paradigmatic status a point against the academic system, given that many protosciences (e.g. Freudian psychoanalysis) were able to grow into sciences within the academic system, a pattern that continues in fields such as astrobiology.