[Question] Why aren’t Yudkowsky & Bostrom getting more attention now?
Now that the idea of AI Existential Risk has gained more acceptance, I am surprised that its thought leaders are not mentioned and contacted more often by AI executives and researchers at the cutting edge (OpenAI, Anthropic, etc.), recognized academic figures (Bengio, Hinton, Hofstadter, etc.), journalists, or political leaders who have expressed an interest.
To be sure, they are mentioned in articles; MIRI was represented in a congressional hearing; and Mira Murati reached out to Eliezer. But still, it seems that the profile of the pioneers is much lower than I’d expect.
Fifteen years ago, one could have said that AI XRisk was treated as a crackpot idea and that MIRI in particular might be ignored because it operates outside a standard framework like a university. But today, not only have the ideas spread, but many top AI-capabilities researchers have, I think, entered the field specifically because of inspiration from the MIRI/LessWrong circle.
Though some journalists might still be hung up on MIRI's lack of social-status markers, I don't think that many others, including cutting-edge AI researchers, are.
So what is going on?