I feel like you still aren’t grappling with the implications of AGI. Human beings have a biologically-imposed minimum wage of (say) 100 watts; what happens when AI systems can be produced and maintained for 10 watts that are better than the best humans at everything? Even if they are (say) only twice as good as the best economists but 1000 times as good as the best programmers?
When humans and AIs are imperfect substitutes, this means that an increase in the supply of AI labor unambiguously raises the physical marginal product of human labor, i.e. humans produce more stuff when there are more AIs around. This is due to specialization. Because there are differing relative productivities, an increase in the supply of AI labor means that an extra human in some tasks can free up more AIs to specialize in what they’re best at.
No, an extra human will only get in the way, because there isn’t a limited number of AIs. For the price of paying the human’s minimum wage (e.g. providing their brain with 100 watts) you could produce & maintain a new AI system that would do the job much better, and you’d have lots of money left over.
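To make that arithmetic concrete, here is a minimal back-of-envelope sketch using the illustrative numbers above (100 watts per human, 10 watts per AI, 2× and 1000× productivity); all figures are assumptions for illustration, not estimates:

```python
# Back-of-envelope comparison using the illustrative numbers from the
# comment above: 100 W to sustain a human, 10 W to run an AI, and AIs
# that are 2x better at economics and 1000x better at programming.
# All figures are assumptions for illustration.

HUMAN_WATTS = 100   # biologically-imposed cost of one human worker
AI_WATTS = 10       # cost of running one AI worker

# AI productivity relative to the best human at each task
relative_productivity = {"economist": 2, "programmer": 1000}

for task, ratio in relative_productivity.items():
    # With an elastic supply of AIs, one human-equivalent unit of output
    # only requires 1/ratio of an AI, i.e. a fraction of its power draw.
    ai_cost = AI_WATTS / ratio
    print(f"{task}: human costs {HUMAN_WATTS} W, AI route costs {ai_cost:g} W "
          f"({HUMAN_WATTS / ai_cost:.0f}x cheaper)")
```

Even in the task where the AIs have the smallest edge, the human’s output costs twenty times as much energy as buying it from AIs, so with an elastic AI supply there is no wage above subsistence at which hiring the human makes sense.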
Technological Growth and Capital Accumulation Will Raise Human Labor Productivity; Horses Can’t Use Technology or Capital
This might happen in the short term, but once there are AIs that can outperform humans at everything…
Maybe a thought experiment would be helpful. Suppose that OpenAI succeeds in building superintelligence, as they say they are trying to do, and the resulting intelligence explosion goes on for longer than you expect and ends up with crazy sci-fi-sounding technologies like self-replicating nanobot swarms. So, OpenAI now has self-replicating nanobot swarms that can re-form into arbitrary shapes, including humanoid shapes. So in particular they can form up into humanoid robots that look & feel exactly like humans, but are smarter and more competent in every way, and also, let’s say, more energy-efficient, so that they can survive on less than 100W. What then? Seems to me like your first two arguments would just immediately fall apart. Your third, about humans still owning capital and using the proceeds to buy things that require a human touch + regulation to ban AIs from certain professions, still stands.
The “biologically imposed minimal wage” is definitely going into my arsenal of verbal tools. This is one of the clearest illustrations of the same position that has been argued since the dawn of LW.
In that case I should clarify that it wasn’t my idea; I got it from someone else on Twitter (maybe Yudkowsky? I forget).
It was Grant Slatton, but Yudkowsky retweeted it.
Admittedly, it’s not actually a minimum wage, but a cost instead:
https://x.com/ben_golub/status/1888655365329576343
this is not what a minimum wage is—that’s called a cost
you may well be right on the merits, but you’re not being careful with economic ideas in ways large and small, and that’s bad when you’re trying to figure out something important
If you offer a salary below 100 watts equivalent, humans won’t accept, because accepting it would mean dying of starvation. (Unless the humans have another source of wealth, in which case this whole discussion is moot.) This is not literally a minimum wage, in the conventional sense of a legally-mandated wage floor; but it has the same effect as a minimum wage, and thus we can expect it to have the same consequences as a minimum wage.
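As a toy sketch of that point (with an assumed 5-watt-equivalent AI replacement cost, a made-up number): if competition pushes the market-clearing wage for human labor below the subsistence floor, then demand for human labor at any wage a human can actually accept is zero, which is the standard binding-minimum-wage result.

```python
# Toy illustration: a subsistence floor acts like a binding minimum wage.
# Numbers are assumptions for illustration, not estimates.

SUBSISTENCE_FLOOR = 100.0    # watt-equivalents a human must earn to stay alive
AI_REPLACEMENT_COST = 5.0    # watt-equivalents to get the same work done by AIs

def human_labor_demanded(wage: float) -> float:
    """Employers hire the human only if she costs no more than the AI substitute."""
    return 1.0 if wage <= AI_REPLACEMENT_COST else 0.0

# Competition drives the wage on offer down toward the AI replacement cost...
print(human_labor_demanded(AI_REPLACEMENT_COST))  # 1.0: work exists at 5 W-equivalent
# ...but a human cannot accept anything below the subsistence floor.
print(human_labor_demanded(SUBSISTENCE_FLOOR))    # 0.0: no demand at 100 W-equivalent
```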
This is obviously (from my perspective) the point that Grant Slatton was trying to make. I don’t know whether Ben Golub misunderstood that point, or was just being annoyingly pedantic. Probably the former—otherwise he could have just spelled out the details himself, instead of complaining, I figure.
Fair enough; I’m just trying to bring up the response here.
Note that it only stands if the AI is sufficiently aligned that it cares that much about obeying orders and not rocking the boat. Which I don’t think is very realistic if we’re talking about that kind of crazy intelligence-explosion super-AI stuff. I guess the question is whether you can have “replace humans”-good AI without almost immediately having “wipes out humans, takes over the universe”-good AI.