Agreed. Thank you for writing this post. Some thoughts:
As somebody strongly on the Agent Foundations train, it puzzles me that there is so little activity outside MIRI itself. We are being told there are almost limitless financial resources, yet, as you explain clearly, it is very hard for people to engage with the material outside of LW.
At the last EA Global there was some sort of AI safety breakout session, with ~12 tables devoted to different topics. I was dismayed to discover that almost every table was full of people excitedly discussing various topics in prosaic AI alignment and other things, while the AF table had just 2 (!) people.
In general, MIRI has a rather insular view of itself. Some of it is justified: I do think they have done most of the interesting research, are well-aligned, and employ many of the smartest & most creative people, etc.
But the world is very very big.
I have spoken with MIRI people, arguing for the need to establish something like a PhD apprenticeship-style system. Not much interest.
Just some sort of official, long-term & OFFLINE study program that would teach some of the previously published MIRI research would be hugely beneficial for growing the AF community.
Finally, there needs to be way more interaction with existing academia. There are plenty of very smart, very capable people in academia doing interesting things with Solomonoff induction, with Cartesian Frames (though they call them Chu spaces), with Pearlian causal inference, with decision theory, with computational complexity & interactive proof systems, with post-Bayesian probability theory, etc. For many in academia, AGI safety is still seen as silly, but that could change if MIRI and Agent Foundations people were able to engage seriously with academia.
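(The Chu-space parenthetical is not a loose analogy: a Cartesian frame over a set of worlds W is exactly a Chu space over W, with the agent's options as points and the environment's options as states. A minimal sketch of the shared structure in Haskell; this is my own illustration, and the names are hypothetical rather than from any MIRI codebase:

```haskell
-- A Chu space over an alphabet k: "points" of type a, "states" of
-- type e, and an evaluation map combining a point with a state.
newtype Chu k a e = Chu { eval :: a -> e -> k }

-- A Cartesian frame over a set of worlds w is exactly the same data:
-- a = the agent's options, e = the environment's options, and eval
-- says which world results when the two are combined.
type CartesianFrame w a e = Chu w a e

-- Toy example: the agent picks whether to carry an umbrella, the
-- environment picks the weather, and the world records both choices.
data Agent = Umbrella | NoUmbrella deriving Show
data Env   = Rain     | Shine      deriving Show

frame :: CartesianFrame (Agent, Env) Agent Env
frame = Chu (,)

main :: IO ()
main = print (eval frame Umbrella Rain)  -- prints (Umbrella,Rain)
```

Someone fluent in the Chu-space literature can therefore read the Cartesian Frames sequence as a special case of structures they already know.)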
One idea could be to organize sabbaticals for prominent academics, plus scholarships for young people. This seems to have happened in the prosaic AI alignment field, but not in AF.
> Just some sort of official, long-term & OFFLINE study program that would teach some of the previously published MIRI research would be hugely beneficial for growing the AF community.

Agreed.

> At the last EA Global there was some sort of AI safety breakout session, with ~12 tables devoted to different topics. I was dismayed to discover that almost every table was full of people excitedly discussing various topics in prosaic AI alignment and other things, while the AF table had just 2 (!) people.

Wow, didn't realise it was that little!

> I have spoken with MIRI people, arguing for the need to establish something like a PhD apprenticeship-style system. Not much interest.

Do you know why they weren't interested?

Unclear. Some things that might be involved:

- a somewhat anti/non-academic vibe
- a feeling that they have the smartest people anyway, and only hire the elite few with a proven track record
- a feeling that it would take too much time and energy to educate people
- a lack of organisational energy
- …

It would be great if somebody from MIRI could chime in.

I might add that I know a number of people interested in AF who feel somewhat adrift and find it difficult to contribute. It feels a bit like a waste of talent.