My guess is we maybe could have done that at least a year earlier, and honestly, given the traction we had in 2015 on a lot of this stuff with Bill Gates and Elon Musk and Demis, I think there is a decent chance we could have done a lot of Overton window shifting back then. Our not having done so is, I think, downstream of a strategy that wanted to maintain lots of social capital with the AI capability companies and with random people in governments who would be weirded out by people saying things outside the Overton window.
Though again, this is just one story, and I also have other stories where it all depended on ChatGPT and GPT-4, and before then you would have been laughed out of the room if you had brought up any of this stuff (though I do really think the 2015 Superintelligence stuff is decent evidence against that). It’s also plausible to me that you need a balance of inside-game and outside-game stuff, that we’ve struck a decent balance, and that yeah, having both inside and outside games means there will be conflict between the people involved in each, but that it’s ultimately the right call in the end.
I really want an analysis of this. The alignment and rationality communities were wrong about how tractable getting public and governmental buy-in to AI x-risk would be. But what exactly was the failure? That seems quite important for figuring out how to alter decision-making and to prevent future failures to grab low-hanging fruit.
I tried writing a fault analysis myself, but I couldn’t make much progress, and it seems like you have more detailed models than I do. So someone other than me is probably the right person for this.
That said, the dialogues on AI governance and outreach are providing some of what I’m looking for here, and seem useful to anyone who does want to write an analysis. So thank you to everyone who’s discussing these topics in public.