If progress in AI is continuous, we should expect record levels of employment. Not the opposite.
My mentality is that if progress in AI doesn’t have a sudden, foom-level jump, and if we don’t all die, most fears of human unemployment are unfounded… at least for a while. Say we get AIs that can replace 90% of the workforce. The productivity surge from this should dramatically boost the economy, creating more companies, more trading, and more jobs. Since AIs can be copied, they would be cheap, abundant labor. This means anything a human can do that an AI still can’t becomes a scarce, highly valued resource. Companies with thousands or millions of AI instances working for them would likely compete for human labor, because making more humans takes much longer than making more AIs. Then say, after a few years, AIs are able to automate 90% of the remaining 10%. That creates even more productivity, more economic growth, and even more jobs. This could continue for a few decades. Eventually, humans will be rendered completely obsolete, but by that point most of them might be so filthy rich that they won’t especially care.
This doesn’t mean it’ll all be smooth sailing or that humans will be totally happy with this shift. Some people won’t enjoy having to switch to a new career, only for that new career to be automated away after a few years, forcing them to switch again. This will probably be especially true for people who are older, have families, or want a stable and certain future. None of this will be made easier by the fact that it’ll probably be hard to tell when true human obsolescence is on the horizon, so some people might live in a state of perpetual anxiety, while others stay in constant denial.
The inverse argument, which I have seen on Reddit, emerges when you examine how these AI models might actually work and learn.
One method is to use a large benchmark of tasks, where model capability is measured as the weighted harmonic mean of the scores across all tasks.
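As a rough illustration (the function and the numbers here are my own, not from any particular benchmark), a weighted harmonic mean is dominated by the weakest tasks, so a model can’t hide one near-failure behind many strong scores:

```python
def weighted_harmonic_mean(scores, weights):
    """Aggregate per-task scores into a single capability number."""
    if any(s <= 0 for s in scores):
        return 0.0  # a hard failure on any task pins the aggregate at zero
    return sum(weights) / sum(w / s for w, s in zip(weights, scores))

# Strong on two tasks, weak on a third: the weak task dominates.
print(weighted_harmonic_mean([0.95, 0.90, 0.10], [1.0, 1.0, 1.0]))  # ~0.25
```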
As the models run, much of the information gained doing real-world tasks is added as training and test tasks to the benchmark suite. (You do this whenever a chat task produces an output that can be objectively checked; for robotic tasks, you run in lockstep a neural simulator, similar to Sora, that makes testable predictions about future real-world inputs.)
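A minimal sketch of that harvesting loop, assuming an objective checker exists for some tasks (every name here is hypothetical):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class BenchmarkTask:
    prompt: str
    check: Callable[[str], bool]  # objective pass/fail test for an output

benchmark_suite: list[BenchmarkTask] = []

def maybe_harvest(prompt: str, output: str,
                  checker: Optional[Callable[[str], bool]]) -> None:
    """Fold a deployed interaction back into the benchmark, but only
    when the output can be objectively verified (e.g. code that must
    pass tests, or math with a checkable answer)."""
    if checker is None:
        return  # subjective task: nothing reusable to harvest
    if checker(output):
        # A verified success becomes a new train/test case for future models.
        benchmark_suite.append(BenchmarkTask(prompt=prompt, check=checker))
```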
What this means is that most models learn from millions of parallel instances of themselves and of other models.
This means that the more models are deployed in the world (that is, the more labor is automated), the more this learning mechanism gets debugged, the faster models learn, and so on.
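To see why that loop compounds, here is a toy model (every constant is invented) where the automated fraction of labor feeds back into the learning rate:

```python
# Toy flywheel: deployment -> more verified data -> faster learning -> more deployment.
automated = 0.05   # fraction of tasks current models handle (assumed)
base_rate = 0.02   # capability gained per year with zero deployment (assumed)
feedback = 0.5     # extra learning per unit of deployed labor (assumed)

for year in range(10):
    gain = base_rate + feedback * automated
    automated = min(1.0, automated + gain * (1.0 - automated))
    print(f"year {year}: {automated:.2f} of tasks automated")
```

Early growth is slow, then the feedback term takes over and the curve steepens until it saturates.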
There are also all kinds of parallel task gains. For example, once models have experience maintaining the equipment in a coke-can factory, an auto plant, and a 3D-printer plant, this variety of tasks with common elements should cause new models trained in sim to gain “general maintenance” skills, at least for machines similar to the three given. (The “skill” is a common policy network that compresses the three similar policies down to one policy in the new version of the network.)
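One way to picture that compression is policy distillation: a single student network trained to match several task-specific teachers. This PyTorch sketch is purely illustrative; the tiny architectures and random “observations” are stand-ins:

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 8

def make_policy() -> nn.Module:
    # Tiny stand-in for a maintenance policy network.
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                         nn.Linear(64, ACT_DIM))

teachers = [make_policy() for _ in range(3)]  # can factory, auto plant, 3D-printer plant
student = make_policy()                       # the hoped-for "general maintenance" policy
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
kl = nn.KLDivLoss(reduction="batchmean")

for step in range(1000):
    obs = torch.randn(64, OBS_DIM)  # stand-in for observations from the three plants
    loss = torch.zeros(())
    for teacher in teachers:
        with torch.no_grad():
            target = torch.softmax(teacher(obs), dim=-1)  # teacher's action distribution
        # KLDivLoss expects log-probabilities from the student, probabilities from the teacher.
        loss = loss + kl(torch.log_softmax(student(obs), dim=-1), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```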
With each subsequent task, the delta (the skills the AI system needs that it doesn’t already have) shrinks, and this delta likely shrinks faster than task difficulty grows. (Even the most difficult tasks are still doable by a human, and the AI system can also cheat in a number of ways: for example, using better actuators to make skilled manual trades easy, or software helpers to beat champion Olympiad contestants.)
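Toy arithmetic for the shrinking delta (the skill sets are invented for illustration): once overlapping skills are learned, each new task demands fewer genuinely new ones:

```python
known = set()
tasks = [  # (task, skills required) -- invented for illustration
    ("can-plant maintenance",  {"diagnose", "replace bearing", "PLC"}),
    ("auto-plant maintenance", {"diagnose", "replace bearing", "PLC", "welding"}),
    ("3D-printer maintenance", {"diagnose", "PLC", "calibrate"}),
]
for name, required in tasks:
    delta = required - known  # only the skills not already learned
    print(f"{name}: needs {len(required)} skills, only {len(delta)} new")
    known |= required
```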
You then have to look at what barriers there are to AI doing a given task, to decide which tasks are protected for a while.
Tasks that simply require a human body:
Medical test subject.
Food taster, perfume evaluator, fashion or aesthetics evaluator.
Various kinds of personal service worker.
AI supervisor roles:
Arguably, checking that the models haven’t betrayed us yet, and sanity-checking their plans and outputs, would be a massive source of employment.
AI developer roles:
The risks mean that some humans need a deep understanding of how the current generation of AI works, plus the tools and time to examine what happened during a failure. Someone in this role needs to be skeptical of explanations offered by another AI system, for obvious reasons.
Government/old-institution roles:
Institutions that don’t value making a profit may continue using human staff for decades after AI can do their jobs, even when it can be shown that AI makes fewer errors and more legally sound decisions.
TLDR: For the portion of jobs that can be automated, automation should arguably spread at an exponential rate, from the easiest and most common jobs to the most difficult and unique ones.
There is a portion of tasks that humans will be required to do for a while, and a portion that it might be a good idea never to automate.