The assumed superintelligence can take what it wants to take, and if people could “produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight”, then it could probably force people to produce it.
I was with you until this sentence. This does not follow.
Let’s suppose “$77 worth of sunlight” has a consistent, agreed-upon meaning. Maybe “Enough sunlight to generate $77 worth of electricity (at the current production cost of $0.04/kWh) with current human-made solar panels over their 25-year lifespan.” This is a little less than what falls on an average plot of land on the order of 20 cm × 20 cm. The superintelligence could hire humans to build the solar panels, or use the electricity to run human-made equipment, or farm the plot to grow about 40 g of corn.
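A rough back-of-envelope check of those figures, sketched in Python. All inputs here are illustrative assumptions (average insolation of ~200 W/m², electricity at $0.04/kWh, corn yield of ~1 kg/m²/yr), and the raw sunlight energy is valued directly at the electricity price:

```python
# Back-of-envelope: how much land receives "$77 worth of sunlight"?
# All constants are illustrative assumptions, not measured values.
AVG_INSOLATION_W_PER_M2 = 200.0   # rough day/night, all-weather average
PRICE_PER_KWH = 0.04              # stated production cost of electricity
YEARS = 25                        # stated panel lifespan
HOURS = YEARS * 365 * 24

target_kwh = 77 / PRICE_PER_KWH                     # 1925 kWh of energy
kwh_per_m2 = AVG_INSOLATION_W_PER_M2 * HOURS / 1000.0
area_m2 = target_kwh / kwh_per_m2                   # plot receiving that much
side_cm = (area_m2 ** 0.5) * 100

print(f"{target_kwh:.0f} kWh -> {area_m2:.3f} m^2 (~{side_cm:.0f} cm square)")
# If ~20% panel efficiency is folded in (electricity generated, not raw
# sunlight received), the required area grows roughly 5x.

# Corn check: ~10 t/ha/yr is about 1 kg/m^2/yr
corn_g_per_year = area_m2 * 1000
print(f"~{corn_g_per_year:.0f} g of corn per year from that plot")
```

Under these assumptions the plot comes out to roughly 21 cm on a side and roughly 40 g of corn per year, consistent with the estimate above.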
What can a superintelligence do with that sunlight? Well, it can develop highly optimized 20-junction solar panels using advanced robotic facilities that can then generate 3x as much electricity. Maybe it has space-based manufacturing so it can use space-based solar and get 10-15x more electricity. It can use the electricity to run superintelligently-designed equipment with greater efficiency and higher quality output than human minds and hands can invent, build, and operate. These options themselves include things like building indoor robot-operated farms that optimally distribute light/water/heat/nutrients to generate >10x more crops per unit area (>40x more with the aforementioned space-based production facilities), or directly chemically synthesize specific desired molecules.
In other words: to have humans produce what a superintelligence could produce from $77 of sunlight, it would cost the superintelligence many times more than $77, and the output quality would be much lower.
Right; my point was just that the hypothetical superintelligence does not need to trade with humans if it can force them; therefore trade-related arguments are not relevant. However, it is of course likely that such a superintelligence would neither want to trade nor care enough about the production of humans to force them to do anything.
Ok, then I agree. As written, it read to me like you were closing by suggesting the AI would want to go for the “conqueror takes both” option instead of the “give the natives smallpox and drive them from their ancestral homes while committing genocide” option.
I’m not sure that is the correct take in the context of Comparative Advantage.
It would not matter whether the SI could produce more than humans in a direct comparison; what matters is the opportunity cost to the SI. If doing the $77 sunlight task itself means the ASI diverts effort from something that would produce more value for it, and that delta in value is greater than the loss from the humans’ lower productivity, then the trade makes sense to the ASI.
Seems to me the questions here are about resource constraints and whether or not an ASI does or does not need to confront them in a meaningful way.
The traditional comparative advantage discussion also, as I understand it, does not account for entities that can readily duplicate themselves in order to perform more tasks in parallel, and does not account for the possibility of wildly different transaction costs between ASI and humans vs between ASI and its non-human robot bodies. Transaction costs in this scenario include monitoring, testing, reduced quality, longer lag times. It is possible that the value of using humans to do any task at all could actually be negative, not just low.
Human analogy: you need to make dinner using $77 worth of ingredients. A toddler offers to do it instead. At what price should you take the deal? When does the toddler have comparative advantage?
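The toddler question can be made concrete with a minimal opportunity-cost calculation (all numbers hypothetical): delegating pays only if the value of your freed-up time exceeds the loss from the delegate’s lower-quality output plus wage and transaction costs.

```python
# Minimal opportunity-cost sketch (all numbers hypothetical).
# You can cook a $77-quality dinner yourself, or delegate and spend
# the time on something else.

def delegation_gain(your_output_value, delegate_output_value,
                    delegate_wage, value_of_freed_time,
                    transaction_costs=0.0):
    """Net gain from delegating vs. doing the task yourself.

    Positive => the trade makes sense despite your absolute advantage.
    transaction_costs covers monitoring, testing, cleanup, lag, etc.
    """
    keep = your_output_value
    delegate = (delegate_output_value - delegate_wage
                - transaction_costs + value_of_freed_time)
    return delegate - keep

# Classic comparative advantage: your freed-up time is valuable enough
# that even a much worse cook is worth hiring.
print(delegation_gain(77, 40, 10, 60))                       # 13: delegate

# The worry raised above: transaction costs swamp the deal, so the
# toddler's "help" has negative value even at a wage of zero.
print(delegation_gain(77, 40, 0, 60, transaction_costs=50))  # -27: don't
```

The second case is the one the standard comparative-advantage story tends to skip: when monitoring and cleanup cost more than the help is worth, no wage, even zero, makes the trade positive.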
Yes, all those conjectures are possible as we don’t yet know what the reality will be—it is currently all conjecture.
The counterargument to yours, I think, is just: what opportunities is the AI giving up by leaving anything to humans? What is the marginal value of all the things this ASI might be doing that we cannot yet even conceive of?
I think the suggestion of a negative value is just out of scope here, as it doesn’t fit into the theory of comparative advantage. That was kind of the point of the OP. It is fine to say comparative advantage will not apply, but we lack any proof of that and have plenty of examples where it actually does hold even when one side has a clear absolute advantage. Trying to reject the proposition by assuming it away seems a weak argument.
It is a lot of assumption and conjecture, that’s true. But it is not all conjecture and assumptions. When comparative advantage applies despite one side having an absolute advantage, we know why it applies. We can point to which premises of the theory are load-bearing, and know what happens when we break those premises. We can point to examples within the range of scenarios that exist among humans, where it doesn’t apply, without ever considering what other capabilities an ASI might have.
I will say I do think there’s a bit of misdirection, not by you, but by a lot of the people who like to talk about comparative advantage in this context, to the point that I find it almost funny that it’s the people questioning premises (like this post does) who get accused of making assumptions and conjectures. I’ve read a number of articles that start by talking about how comparative advantage normally means there’s value in one agent’s labor even when another has absolute advantage, which is of course true. Then they simply assume the necessary premises apply in the context of humans and ASI, without ever actually investigating that assumption, looking for limits and edge cases, or asking what happens if and when the premises don’t hold. In other words, the articles I’ve read aren’t trying to figure out whether comparative advantage is likely to apply in this case. They’re simply assuming it will, and assuming that those questioning this assumption, or asking about the probability and conditions of it holding, don’t understand the underlying theory.
For comparative advantage to apply, there are conditions. Breaking the conditions doesn’t always break comparative advantage, of course, since none of them ever perfectly applies in real life, but they are the openings that allow it to sometimes not apply. Many of these conditions are predictably broken more often when dealing with ASI, meaning there will be more cases where comparative advantage considerations do not control the outcome.
A) Perfect factor mobility within but none between countries.
B) Zero transportation costs.
Plausibly these two apply about as well to the ASI scenario as among humans? Although with labor as a factor, human skill and knowledge act as limiters in ways that just don’t apply to ASI.
C) Constant returns to scale—untrue in general, but even small discrepancies would be much more significant if ASI typically operates at much larger or much more finely tuned scales than humans can.
D) No externalities—potentially very different in ASI scenario, since methods used for production will also be very different in many cases, and externalities will have very different impacts on ASI vs on humans.
E) Perfect information—theoretically impossible in the ASI scenario; the ASI will have better information and a better understanding of it.
F) Equivalent products that differ only in price—not true in general, quality varies by source, and ASI amplifies this gap.
For me, the relevant questions, given all this, are 1) Will comparative advantage still favor ASI hiring humans for any given tasks? 2) If so, will the wage at which ASI is better off choosing to pay humans be at or above subsistence? 3) If so, are there enough such scenarios to support the current human population? 4) Will 1-3 continue to hold in the long run? 5) Are we confident enough in 1-4 for these considerations to meaningfully affect our strategy in developing and deploying AI systems of various sorts?
I happily grant that (1) is likely. (2) is possible but I find it doubtful except in early transitional periods. (3)-(4) seem very, very implausible to me. (5) I don’t know enough about to begin to think about concretely, which means I have to assume “no” to avoid doing very stupid things.
I think I did not assume anything away. I pointed out that the theory of comparative advantage rests on assumptions, in particular autonomy. If someone can just force you to surrender your production (without a loss of production value), he will not trade with you (except maybe if he is nice).