Why was the AI Alignment community so unprepared for engaging with the wider world when the moment finally came?
I reject the premise. Actually, I think public communication has gone pretty dang well since ChatGPT. Not only has AI existential risk become a mainstream, semi-respectable concern (especially among top AI researchers and labs, which count the most!), but this is obviously because of the 20 years of groundwork the rationality and EA communities have laid down.
We’ve had well-funded organizations like CAIS able to get credible mainstream signatories. We’ve had lots and lots of favorable or at least sympathetic articles in basically every mainstream Western newspaper. Public polling shows that average people are broadly responsive. The UK is funding real AI safety to the tune of millions of dollars. And all this is despite the immediately-preceding public relations catastrophe of FTX!
The only perspective from which you can say there’s been utter failure is the Yudkowskian one, where the lack of momentum toward strict international treaties spells doom. I grant that this is a reasonable position, but it’s not the majority one in the community, so it’s hardly a community-wide failure for that not to happen. (And I believe it is a victory of sorts that it’s gotten into the Overton window at all.)
The UK funding is far and away the biggest win to date, no doubt.
And all this is despite the immediately-preceding public relations catastrophe of FTX!
Do you feel that FTX/EA is that closely tied in the public mind and was a major setback for AI alignment? That is not my model at all.
We all know they are inextricably tied, but I suspect that if you asked the very people in those same polls whether they knew SBF supported AI risk research, they wouldn’t know or care.
I don’t think they’re closely tied in the public mind, but I do think the connection is known to the organs of media and government that interact with AI alignment. It comes up often enough, in the background—details like FTX having a large stake in Anthropic, for example. And the opponents of AI x-risk and EA certainly try to bring it up as often as possible.
Basically, my model is that FTX seriously undermined the insider credibility of AINotKillEveryoneIsm’s most institutionally powerful proponents, but the remaining credibility was enough to work with.
The only perspective from which you can say there’s been utter failure is the Yudkowskian one, where the lack of momentum toward strict international treaties spells doom.
He hasn’t just failed to get anything done; he is having a lot of trouble even communicating his point.