I think this is a reasonable direction for hope. But details matter. In a lot of likely-looking medium takeoff scenarios, you don’t have an aligned ASI; you’ve got many aligned AGIs at, and increasingly above, the human level of intelligence. If the damage is done by the time they’re superintelligent, we may not get the help we need in time.
My own hope is that we do not get massive proliferation of AGI, because some sufficiently powerful coalition of governments steps in and says “hey um perhaps we shouldn’t replace all of human labor all at once, not to mention maybe not keep making and distributing AGIs until we get a misaligned recursively self-improving one”—possibly because their weakly superhuman AGI suggested they might want to do that.
Or that a bunch of smart people on LW and elsewhere have thoroughly debated the issue before then and determined that proliferating AGIs leads to short-term economic doom even more certainly than to long-term existential doom for humanity...
Alright, that’s fair enough.
In the medium takeoff scenario, my plan would be to achieve basically what I wanted in my original scenario, but with quantity instead of quality. As soon as we get weakly superhuman AGI, we can try throwing 1000 of them at this task. Assuming they are better than humans at intellectual tasks, I would think that having 1000 genius researchers working on the problem with very good coordination, 24/7, gives us pretty good chances. The main bottleneck here is how much energy each one of them consumes. I am decently confident that we can afford at least 1000, and probably many more, which further boosts our chances.
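As a rough sanity check on the energy point, here is a back-of-envelope sketch. The per-instance figures are my own assumptions for illustration (roughly "one AGI instance runs on a small cluster of current accelerators"), not anything established about what such a system would actually need:

```python
# Back-of-envelope check on "can we afford 1000 of them?"
# All numbers below are assumptions for illustration only.
gpu_power_kw = 0.7        # assumed draw of one H100-class accelerator, in kW
gpus_per_instance = 8     # assumed accelerators needed to serve one AGI instance
num_instances = 1000

total_mw = gpu_power_kw * gpus_per_instance * num_instances / 1000
print(f"~{total_mw:.1f} MW for {num_instances} instances")  # ~5.6 MW

# For scale: large datacenter campuses are planned in the 100+ MW range,
# so under these assumptions 1000 instances would use a small fraction of one site.
```

If each instance turned out to need far more compute than this, the conclusion weakens accordingly, but there is a lot of headroom between single-digit megawatts and what existing infrastructure can supply.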