That’s certainly a fair concern. The worst-case scenario is one where we have AGI that can displace human labour but can’t solve economics, combined with a slow takeoff.
Here are some of the things that work in our favor in that scenario:
Companies have turned out to replace human workers much more slowly than I expected. This is purely anecdotal, but there are low-level jobs at my workplace that could be almost fully automated with just the technologies we have now. They still haven’t been, mostly, I suspect, because of the convenience of relying on humans.
Under slow takeoff, jobs would mostly be displaced in groups, not all at once. For example, ChatGPT put heavy pressure on copywriters. After they could no longer work as copywriters, some of them moved to other jobs. So far the effect has been localized, and under slow takeoff, chances are the trend will continue.
Robotics is advancing much more slowly and much less dramatically than LLMs. If you are a former copywriter who is out of work, fields that require robotic work should be safe for at least some time.
“We’ve always managed in the past. Take the industrial revolution, for example: people stop doing the work that’s been automated and find new, usually better-compensated work to do.” This argument works again here, because we are talking about an AI that, for the time being, is clearly not better than humans at everything.
Even an AI that can’t solve economics by itself can help economists do their jobs. And by the time this becomes relevant, AI will be better than what we have now. I am especially excited about its use as a quick lookup tool for specific information that’s tricky to google.
Slow takeoff means economists and people on LessWrong have more time to think about solving post-ASI economics. We’ve come a long way since 2022 (when it all arguably blew up), and that’s been just two years.
Slow takeoff also means governments have more time to wake up to the potential economic problems we might face as AI gets better and better.