This seems doubtful to me; if Yann truly believed that AI was an imminent extinction risk, or even thought that risk was credible, what would he hope to gain by ridiculing people who are similarly worried?
It often crosses my mind that public discourse about AI safety might not be useful. Tell men that AGI is powerful and they’ll start trying harder to acquire it. Tell legislators, and perhaps Yann thinks they’ll just start an arms race, complicate the work, and not do much else.
I could imagine someone suppressing their alignment fears temporarily, to work their way up to a position of power in a capabilities lab and then steer outcomes from there.
But that doesn’t seem to work, since:
The top AI capabilities labs (OpenAI, DeepMind, Anthropic) are more vocal about capabilities. Meta AI is a follow-the-leader lab anyway.
I don’t think “bringing up concerns later, instead of now” is a strategically great way to do this. I don’t know a ton about the politics of historical programs for e.g. atomic weapons and bioweapons. But based on my cursory knowledge, I don’t think “be worried in secret” is anything like a slam-dunk for those situations.
Yann, specifically, is already the Chief AI Person at Meta/Facebook! Unless Meta is really quick to fire people (or Yann is angling for Zuckerberg’s position), what more career capital could he gain at this stage?
I wonder if that’s what he’s thinking.
That’s also my confusion, yes.