The traditional comparative advantage discussion also, as I understand it, does not account for entities that can readily duplicate themselves to perform more tasks in parallel, nor for the possibility of wildly different transaction costs between ASI and humans versus between ASI and its non-human robot bodies. Transaction costs in this scenario include monitoring, testing, reduced quality, and longer lag times. It is possible that the value of using humans to do any task at all could actually be negative, not just low.
Human analogy: you need to make dinner using $77 worth of ingredients. A toddler offers to do it instead. At what price should you take the deal? When does the toddler have comparative advantage?
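To make the arithmetic explicit, here is a minimal sketch with hypothetical numbers. Only the $77 ingredient cost comes from the analogy; the spoilage probability, supervision cost, and value of my time are invented for illustration:

```python
# Hypothetical expected-value calculation for delegating dinner to a toddler.
# Only the $77 ingredient cost comes from the analogy above; every other
# number is an illustrative assumption.

INGREDIENTS = 77.0        # value of the ingredients at stake
MY_TIME_SAVED = 20.0      # value of the hour I would save (assumption)
P_RUINED = 0.9            # probability the toddler ruins the meal (assumption)
SUPERVISION_COST = 15.0   # my time spent monitoring anyway (assumption)

def value_of_delegating(wage: float) -> float:
    """Expected gain from paying the toddler `wage` instead of cooking myself."""
    expected_loss = P_RUINED * INGREDIENTS
    return MY_TIME_SAVED - wage - expected_loss - SUPERVISION_COST

print(value_of_delegating(0.0))  # -64.3: negative even at a wage of zero
```

Under these (made-up) numbers, the toddler's labor has negative value at any non-negative wage: the deal only makes sense if the toddler pays me more than $64.30 to cook.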
Yes, all those conjectures are possible as we don’t yet know what the reality will be—it is currently all conjecture.
The counterargument to yours, I think, is just: what opportunities is the AI giving up to do whatever humans might be left to do? What is the marginal value of all the things this ASI might be able to do that we cannot yet even conceive of?
I think the suggestion of a negative value is simply out of scope here, as it doesn't fit into the theory of comparative advantage. That was kind of the point of the OP. It is fine to say comparative advantage will not apply, but we lack any proof of that, and we have plenty of examples where it does hold even when one side has a clear absolute advantage. Trying to reject the proposition by assuming it away seems like a weak argument.
It is a lot of assumption and conjecture, that's true. But it is not all conjecture and assumption. When comparative advantage applies despite one side having an absolute advantage, we know why it applies. We can point to which premises of the theory are load-bearing, and we know what happens when we break those premises. We can point to examples, within the range of scenarios that already exist among humans, where it doesn't apply, without ever considering what other capabilities an ASI might have.
I will say I do think there's a bit of misdirection, not by you, but by a lot of the people who like to talk about comparative advantage in this context, to the point that I find it almost funny that it's the people questioning premises (as this post does) who get accused of making assumptions and conjectures. I've read a number of articles that start by explaining how comparative advantage normally means there's value in one agent's labor even when another has absolute advantage, which is of course true. Then they simply assume the necessary premises apply in the context of humans and ASI, without ever investigating that assumption, looking for limits and edge cases, or asking what actually happens if and when the premises don't hold. In other words, the articles I've read aren't trying to figure out whether comparative advantage is likely to apply in this case. They simply assume it will, and assume that those questioning this assumption, or asking about the probability and conditions of it holding, don't understand the underlying theory.
For comparative advantage to apply, there are conditions. Breaking the conditions doesn't always break comparative advantage, of course, because none of them ever applies perfectly in real life, but they are the openings that allow it to sometimes fail. Many of these are predictably broken more often when dealing with ASI, meaning there will be more cases where comparative advantage considerations do not control the outcome. (A toy numerical sketch of how a broken premise can erase the gains from trade follows the list.)
A) Perfect factor mobility within but none between countries.
B) Zero transportation costs.
Plausibly these two apply about as well to the ASI scenario as among humans? Although with labor as a factor, human skill and knowledge act as limiters in ways that just don’t apply to ASI.
C) Constant returns to scale—untrue in general, but even small discrepancies would be much more significant if ASI typically operates at a much larger or much more finely tuned scale than humans can.
D) No externalities—potentially very different in the ASI scenario, since the methods of production will also be very different in many cases, and externalities will have very different impacts on ASI than on humans.
E) Perfect information—broken by construction in the ASI scenario: the ASI will have better information and a better understanding of it.
F) Equivalent products that differ only in price—not true in general; quality varies by source, and ASI amplifies this gap.
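To make both the textbook result and its fragility concrete, here is a toy Ricardian calculation. All productivity numbers are invented for illustration; the first part reproduces the standard case where trade pays despite an absolute advantage, and the second shows how a flat per-unit transaction cost (monitoring, testing, lag) can erase the surplus entirely:

```python
# Toy Ricardian model: the ASI has an absolute advantage in both goods,
# yet the human has a comparative advantage in good X. All productivity
# numbers are invented for illustration.

asi   = {"X": 100.0, "Y": 100.0}   # units produced per hour
human = {"X": 2.0,   "Y": 1.0}

def opp_cost_x(agent):
    """Opportunity cost of one unit of X, in units of Y forgone."""
    return agent["Y"] / agent["X"]

asi_oc, human_oc = opp_cost_x(asi), opp_cost_x(human)
print(asi_oc, human_oc)  # 1.0 vs 0.5: human has comparative advantage in X

def trade_band(t):
    """Range of mutually beneficial prices for X (in Y per unit), given a
    per-unit transaction cost t paid by the ASI, or None if none exists."""
    lo, hi = human_oc, asi_oc - t
    return (lo, hi) if lo < hi else None

print(trade_band(0.0))  # (0.5, 1.0): trade pays, exactly as the theory says
print(trade_band(0.7))  # None: the transaction cost has erased the surplus
```

That is the crux of the questions below: not whether comparative advantage exists in the frictionless model (it does), but whether the surplus survives the frictions.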
For me, the relevant questions, given all this, are:
1) Will comparative advantage still favor ASI hiring humans for any given tasks?
2) If so, will the wage at which ASI is better off choosing to pay humans be at or above subsistence?
3) If so, are there enough such scenarios to support the current human population?
4) Will 1-3 continue to hold in the long run?
5) Are we confident enough in 1-4 for these considerations to meaningfully affect our strategy in developing and deploying AI systems of various sorts?
I happily grant that (1) is likely. (2) is possible, but I find it doubtful except in early transitional periods. (3) and (4) seem very, very implausible to me. (5) I don't know enough about to think through concretely, which means I have to assume "no" to avoid doing very stupid things.
I think I did not assume anything away. I pointed out that the theory of comparative advantage rests on assumptions, in particular autonomy. If someone can simply force you to surrender your production (without any loss of production value), he will not trade with you (except perhaps if he is nice).
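A one-function sketch of that autonomy premise, with illustrative numbers (the threshold logic is the point, not the figures):

```python
# Hypothetical sketch of the autonomy premise: the stronger party trades
# only while paying is cheaper than taking. Numbers are illustrative.

def trades_rather_than_takes(price: float, cost_of_coercion: float) -> bool:
    """The stronger party pays `price` only if seizing would cost it more."""
    return price < cost_of_coercion

print(trades_rather_than_takes(price=10.0, cost_of_coercion=50.0))  # True: trade
print(trades_rather_than_takes(price=10.0, cost_of_coercion=0.0))   # False: it just takes
```

If coercion is costless (no loss of production value), the trade band collapses and comparative advantage never gets a chance to matter.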