I think one issue is that someone can be aware of a specific worldview’s existence, and even consider it plausible, but still be quite bad at understanding what it would imply or look like in practice if it were true.
For me personally, it’s not that I explicitly singled out the scenario that happened and assigned it some very low probability. Instead, I think I mostly just thought about scenarios that all start from different assumptions, and that was that.
For instance, when reading Paul’s “What failure looks like” (which I had done multiple times), I thought I understood the scenario and even explicitly assigned it significant likelihood. But as it turned out, I didn’t really understand it, because I never thought in detail about “how do we get from a 2021 world (before ChatGPT) to something like the world where things go off the rails in Paul’s description?” If I had asked myself that question, I’d probably have realized that his worldview implies there probably isn’t a clear-cut “we built the first AGI!” moment at which AI boxing would be relevant.
I did have some probability mass on AI boxing being relevant, and I still have some probability mass on sudden recursive self-improvement. But I also had significant probability mass on AI being economically important, and therefore very visible, and with accelerating progress I thought many people would become concerned about it. I don’t know that I would’ve predicted a ChatGPT moment in particular (I probably would have guessed some large AI accident), but the point is that we should have been ready for the case where the public and governments became concerned about AI. I think the fact that there were some AI governance efforts before ChatGPT was due in large part to people like Paul saying there could be a slow takeoff.
I’m surprised no one has mentioned Paul’s long-standing support (e.g.) for the view that continuous progress means slow takeoff. Of course there’s Hanson as well.
I assumed somebody had. Maybe everyone did, haha.