Here is my metastrategy for transitioning to a post-ASI economy:
If we have an aligned ASI (which you’ve granted), we can just ask it for the best way for humanity to transition to a post-ASI economy. Given that the ASI will be vastly smarter than any human alive at the time, it could probably come up with a solution better than anything we currently have.
I should have included this in my list from the start. I basically agree with @Seth Herd that this is a promising direction, but I’m concerned about the damage that could occur during takeoff, which could be a years-long period.
That’s certainly a fair concern. The worst-case scenario is a slow takeoff with an AGI that can displace human labour but can’t solve economics.
Here are some of the things that work in our favor in that scenario:
Companies have turned out to replace human workers much more slowly than I expected. This is purely anecdotal, but there are low-level jobs at my workplace that could be almost fully automated with just the technologies we have now. They still haven’t been, mostly, I suspect, because of the convenience of relying on humans.
Under slow takeoff, jobs would mostly be replaced in groups, not all at once. For example, ChatGPT put heavy pressure on copywriters. After they could no longer work as copywriters, some of them moved to other jobs. So far the effect has been local, and under a slow takeoff, chances are the trend will continue.
Robotics is advancing much more slowly and much less dramatically than LLMs. If you are a jobless former copywriter, fields whose automation requires robotics should be safe for at least some time.
“We’ve always managed in the past. Take the Industrial Revolution, for example. People stop doing the work that’s been automated and find new, usually better-compensated work to do.” This argument works again here, because we are talking about an AI that, for the time being, is clearly not better than humans at everything.
Even an AI that can’t solve economics by itself can help economists do their jobs. By the time this becomes relevant, AI will be better than what we have now. I am especially excited about its use as a quick lookup tool for specific information that’s tricky to google.
Slow takeoff means economists and people on LessWrong have more time to think about solving post-ASI economics. We’ve come a long way since 2022 (when it all arguably blew up), and it has been just two years.
Slow takeoff also means that governments have more time to wake up to the potential economic problems we might face as AI gets better and better.
I think this is a reasonable direction for hope. But details matter. In a lot of likely-looking medium takeoff scenarios, you don’t have an aligned ASI; you’ve got many aligned AGIs around and increasingly above the human level of intelligence. If the damage is done by the time they’re superintelligent, we may not get the help we need in time.
My own hope is that we do not get massive proliferation of AGI, because some sufficiently powerful coalition of governments steps in and says “hey um perhaps we shouldn’t replace all of human labor all at once, not to mention maybe not keep making and distributing AGIs until we get a misaligned recursively self-improving one”—possibly because their weakly superhuman AGI suggested they might want to do that.
Or that a bunch of smart people on LW and elsewhere have thoroughly debated the issue before then and determined that proliferating AGIs leads to short-term economic doom even more certainly than long-term existential doom for humanity....
Alright, that’s fair enough.
In the medium takeoff scenario, my plan would be to achieve basically what I wanted in my original scenario, but with quantity instead of quality. As soon as we get weakly superhuman AGI, we can try throwing 1000 instances at this task. Assuming they are better than humans at intellectual tasks, I would think that 1000 genius researchers working on an issue with very good coordination, 24/7, gives us pretty good chances. The main bottleneck is how much energy each instance consumes. I am decently confident that we can afford at least 1000, and probably many more, which further boosts our chances.
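To make “we can afford at least 1000” concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is an illustrative assumption on my part (GPUs per AGI instance, per-GPU power draw, datacenter capacity), not a measurement of any real system:

```python
# Back-of-envelope: power budget for running 1000 AGI instances.
# All constants below are illustrative assumptions, not measurements.

GPUS_PER_INSTANCE = 8    # assumed: GPUs needed to serve one AGI instance
WATTS_PER_GPU = 700      # assumed: draw per GPU under load (roughly an H100's TDP)
INSTANCES = 1000

total_watts = INSTANCES * GPUS_PER_INSTANCE * WATTS_PER_GPU
total_megawatts = total_watts / 1e6

# Large datacenters are often quoted in the ~100 MW range (assumed here).
DATACENTER_MW = 100

print(f"Power for {INSTANCES} instances: {total_megawatts:.1f} MW")
print(f"Fraction of one ~{DATACENTER_MW} MW datacenter: "
      f"{total_megawatts / DATACENTER_MW:.1%}")
```

Under these assumptions the whole fleet draws about 5.6 MW, a small slice of a single large datacenter, which is why I doubt energy would cap us anywhere near 1000. If an instance turns out to need 10x or 100x more compute, the conclusion weakens proportionally, but there is a lot of headroom.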