The reason for this is comparative advantage. Low-skilled laborers are worse than high-skilled ones at all tasks. But high-skilled laborers face constraints: if they work on an assembly line, they can’t also teach at the university …
… This applies just as strongly to human-level AGIs. They would face very different constraints from those of human geniuses, but they would still face constraints.
It’s reasonable to expect that the strength of these constraints matters for the size of the effect on low-skilled workers. For high-skilled humans, the constraint you describe is quite strong: there’s only one of them, there are only so many hours in the day, and training a replacement is very resource-intensive. For high-ability AIs, the constraint is much weaker. The cost of doing one more thing is just the cost of a bit more compute and power to run another AI copy.
If a given worker has an absolute disadvantage vs. AI in all tasks, then the wage they would get should instead be determined by the marginal cost of making an AI copy (plus whatever robotics etc. are necessary). That means a bit more compute (maybe not very much, once we account for efficiency gains in inference compute) and a bit more power to run that compute. This doesn’t sound like much of a living wage, unless ALL necessities get a lot cheaper. Maybe robots are a bit more expensive, so low-wage workers could eke out some pittance being the hands for an AI overseer.
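The wage-ceiling logic above can be sketched with a toy calculation. All the dollar figures and function names here are hypothetical placeholders for illustration, not estimates:

```python
# Toy sketch of the wage-ceiling argument: if a worker has an absolute
# disadvantage at every task, no employer should pay them more than the
# marginal cost of running one more AI copy on that task.
# All numbers below are made-up placeholders, not real cost estimates.

def ai_marginal_cost_per_hour(compute_cost, power_cost, robotics_cost=0.0):
    """Marginal hourly cost of spinning up one more AI copy."""
    return compute_cost + power_cost + robotics_cost

def wage_ceiling(ai_cost_per_hour):
    # Under absolute disadvantage across all tasks, the human wage
    # is capped by the cost of the AI substitute.
    return ai_cost_per_hour

# Purely cognitive task: just a bit more compute and power.
cognitive = ai_marginal_cost_per_hour(compute_cost=0.50, power_cost=0.10)

# Physical task: robot "hands" add cost, leaving slightly more room
# for a human worker acting as hands for an AI overseer.
physical = ai_marginal_cost_per_hour(compute_cost=0.50, power_cost=0.10,
                                     robotics_cost=4.00)

print(f"wage ceiling, cognitive task: ${wage_ceiling(cognitive):.2f}/hr")
print(f"wage ceiling, physical task:  ${wage_ceiling(physical):.2f}/hr")
```

The point of the sketch is just the comparison: the ceiling for physical work sits above the ceiling for cognitive work exactly to the extent that robotics stays expensive relative to compute.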
That said, these sorts of Baumol effect arguments will likely continue to hold as AI replaces a large and growing fraction of tasks. “Capital replaces a lot of tasks” and “capital replaces all tasks” are qualitatively different stories.