Now that the idea of AI Existential Risk has gained more acceptance, I am surprised that the field's thought leaders are not more often mentioned and contacted by AI executives and researchers at the cutting edge (OpenAI, Anthropic, etc.), recognized academic figures (Bengio, Hinton, Hofstadter, etc.), journalists, or political leaders who have expressed an interest.
To be sure, they are mentioned in articles; MIRI was represented in a congressional hearing; and Mira Murati reached out to Eliezer. But still, it seems that the profile of the pioneers is much lower than I’d expect.
Fifteen years ago, we could have said that AI XRisk was treated as a crackpot idea and that MIRI in particular might be ignored, as it operates outside a standard framework like a university. But today, not only have the ideas spread, but many top AI-capabilities researchers have, I think, entered the field specifically because of inspiration from the MIRI/LessWrong circle.
Though some journalists might still be hung up on MIRI’s lack of social status markers, I don’t think that many others, including cutting-edge AI researchers, are.
So what is going on?
I think Bostrom isn’t necessarily that interested in working on AI x-Risk right now as opposed to other topics (see his recent work here, for example). This seems pretty reasonable from my perspective; I think his comparative advantage plausibly lies elsewhere at this point.
As for Yudkowsky, I think he’s often considered (by people at AI labs and in AI governance) to have incorrect views and not to be a “thought leader” with good views on AI x-Risk. He is also maybe just somewhat annoying to work and interact with. (A roughly similar story probably applies to Nate Soares.)
Part of the reason AI labs don’t pay attention to Yudkowsky is probably that he is clearly advocating for a total stop to frontier AI development, which might make the labs (the people doing frontier AI development) less inclined to listen to him.
(My personal take is that Yudkowsky doesn’t really have very good views on what AI governance should do at the margin or what AI labs should do at the margin, and thus the situation seems acceptable. That said, it seems like an obviously bad dynamic if (public) criticism of AI labs makes them not want to interact with you.)
I have heard that elsewhere as well. Still, I don’t really see it myself, whether in his public posts or in my limited interactions with him. He can be rough, and on rare occasions has said things that could be considered personally disrespectful, but I didn’t think people were that delicate.
True, but I had thought better of those people. I would have thought that they could take criticism, especially from someone who inspired some of them to enter the field in the first place.
Thank you for your answers. Overall, I think you are right, though I don’t quite understand.
You may wish to update on this. I’ve only exchanged a few words with one of the people named, but that was enough to make it clear he doesn’t bother being respectful. That may work in some non-delicate research environment I don’t want to know about, but most bright academics I know like to have fun at work, and would leave any such non-delicate work environment (unless they made it their personal duty to clean the place up).
Just slight pushback here to say that in some circles they are getting a lot more attention, just not necessarily in public from people in leadership positions. Neither of them is especially respectable for various reasons, so leaders don’t want to associate with them too much publicly, though we don’t know whether they are paying attention to them in private.