Comparative advantage explains how to make use of inefficient agents, so that ignoring them is a worse option. But if you can convert them into something else, you are no longer comparing the gain from trading with them to the indifference of ignoring them; you are comparing the gain from trading with them to the gain from converting them. And if they can be cheaply converted into something much more efficient than they are, converting them is the winning move. This move is largely unavailable to present-day society, so its absence is a reasonable assumption for now, but it breaks down when you consider an indifferent smart AGI.
The law of comparative advantage relies on some implicit assumptions that are not likely to hold between a superintelligence and humans:
The transaction costs must be small enough not to negate the gains from trade. A superintelligence may require more resources to issue a trade request to slow-thinking humans and to receive the result, possibly leaving processes idle while it waits, than to simply do the task itself.
Your trading partner must not have the option of building a more desirable trading partner out of your component parts. A superintelligence could get more productivity out of atoms arranged as an extension of itself than out of atoms arranged as humans. (ETA: See Nesov’s comment.) A toy version of this trade-versus-convert comparison is sketched below.
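To make the comparison concrete, here is a minimal sketch in Python with invented payoff numbers (the function name and all figures are hypothetical, purely for illustration). It encodes the decision rule described above: an agent picks whichever of ignoring, trading, or converting has the highest net payoff, and cheap conversion dominates once it is on the table.

```python
# Toy decision rule with made-up numbers: compare ignoring a partner,
# trading with them (net of transaction costs), and converting their
# resources into something more efficient.

def best_option(gain_from_trade, transaction_cost, gain_from_converting):
    """Return the option with the highest net payoff."""
    options = {
        "ignore": 0.0,                                # baseline: do nothing
        "trade": gain_from_trade - transaction_cost,  # surplus minus overhead
        "convert": gain_from_converting,              # repurpose the partner's atoms
    }
    return max(options, key=options.get)

# Between humans, conversion is not an option, so trade wins:
print(best_option(gain_from_trade=10, transaction_cost=2,
                  gain_from_converting=0))    # -> trade

# For a superintelligence that can cheaply repurpose the same atoms,
# the identical trade surplus loses to conversion:
print(best_option(gain_from_trade=10, transaction_cost=2,
                  gain_from_converting=100))  # -> convert

# And if transaction costs with slow-thinking partners exceed the
# surplus, even ignoring beats trading:
print(best_option(gain_from_trade=10, transaction_cost=15,
                  gain_from_converting=0))    # -> ignore
```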
A sufficiently clever AI should understand Comparative Advantage
And a sufficiently clever human should realize that clever humans can and do routinely increase the efficiencies of their industry enough to shift the comparative advantage.
It really doesn’t take that much human-level intelligence to change how things are done—all it takes is a lack of attachment to the current ways.
And that’s perhaps the biggest “natural resource” an AI has: the lack of status quo bias.
And a sufficiently clever human should realize that clever humans can and do routinely increase the efficiencies of their industry enough to shift the comparative advantage.
I don’t understand what you are arguing for. That people become better off doing something different doesn’t necessarily imply that they become obsolete, or even that they can’t continue doing the less efficient thing.
And a sufficiently clever human should realize that clever humans can and do routinely increase the efficiencies of their industry enough to shift the comparative advantage.
I’m not sure I understand what “shift the comparative advantage” could mean, and I have no idea why this is supposed to be a response to my point.
Maybe I didn’t make my point clearly enough. My contention is that even if an AI is better at absolutely everything than a human being, it could still be better off trading with human beings for certain goods, for the simple reason that its time and resources are finite and it can’t do everything at once; in such a scenario both human beings and the AI would get gains from trade.
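To spell out the gains-from-trade claim, here is a standard Ricardian toy calculation in Python (all production rates and targets are invented for illustration). The AI is absolutely better at producing both goods, yet because the human’s opportunity cost of essays is lower, assigning the human to essays frees more of the AI’s scarce hours and raises total output.

```python
# Ricardian toy example with invented numbers: both plans deliver the
# same 60 widgets, but letting the human specialize where its
# opportunity cost is lower leaves more hours for essays overall.

RATE = {
    "AI":    {"widgets": 10, "essays": 8},  # AI is better at both goods
    "human": {"widgets": 1,  "essays": 2},
}
HOURS = 10          # hours of labor available to each party
WIDGET_TARGET = 60  # both plans must produce 60 widgets

def total_essays(human_widget_hours):
    """Essays produced once the widget target is met, given how many
    hours the human spends on widgets (the AI covers the remainder)."""
    human_widgets = RATE["human"]["widgets"] * human_widget_hours
    ai_widget_hours = (WIDGET_TARGET - human_widgets) / RATE["AI"]["widgets"]
    ai_essays = RATE["AI"]["essays"] * (HOURS - ai_widget_hours)
    human_essays = RATE["human"]["essays"] * (HOURS - human_widget_hours)
    return ai_essays + human_essays

# Human helps with widgets, where the AI is 10x faster:
print(total_essays(human_widget_hours=10))  # -> 40.0 essays

# Human specializes in essays, its comparative advantage:
print(total_essays(human_widget_hours=0))   # -> 52.0 essays
```

Same widget output, twelve more essays: the trade benefits both parties even though the AI is better at everything.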
As Nesov points out, if the AI has the option of, say, converting human beings into computational substrate and using them to simulate new versions of itself, then this ceases to be relevant.