So long as property rights are respected, humans will continue to have a comparative advantage in something, and whatever that is we will be much richer in a world with hyper-competitive AGI than we are today.
I don’t think this is right? Consider the following toy example. Suppose there’s a human who doesn’t own anything except his own labor. He consumes 1 unit of raw materials (RM) per day to survive and can use his labor to turn 1 unit of RM into 1 paperclip or 2 staples per hour. Then someone invents an AI that takes 1 unit of RM to build, 1 unit of RM per day to maintain, and can turn 1 unit of RM into 3 paperclips or 3 staples per hour. (Let’s say he makes the AI open source, so anyone can build it and there’s perfect competition among the AIs.) Even though the human seemingly has a comparative advantage in making staples, nobody would hire him to make either staples or paperclips anymore, so he quickly starves to death (absent some kind of welfare/transfer scheme).
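A rough back-of-the-envelope sketch of the arithmetic, under assumptions the example leaves unstated (a 24-hour workday, the AI’s one-off 1 RM build cost amortized to roughly nothing, and staple prices competed down to the AI’s marginal cost):

```python
# Toy numbers from the example above. Assumed, not stated in the example:
# a 24-hour workday, and the AI's one-off 1 RM build cost amortized to ~0.
HOURS = 24

human_staples_per_rm = 2    # human: 1 RM -> 2 staples per hour
ai_staples_per_rm = 3       # AI:    1 RM -> 3 staples per hour
ai_daily_upkeep = 1         # RM/day to keep an AI running
human_daily_food = 1        # RM/day the human needs to survive

# Perfect competition among freely copyable AIs pushes the staple price down
# to the AI's all-in resource cost: (24 RM of input + 1 RM upkeep) / 72 staples.
staple_price = (HOURS + ai_daily_upkeep) / (ai_staples_per_rm * HOURS)   # ~0.35 RM

# An employer deciding where to send 1 RM of input each hour:
human_option = human_staples_per_rm * staple_price                       # ~0.69 RM of output
ai_option = ai_staples_per_rm * staple_price - ai_daily_upkeep / HOURS   # ~1.00 RM net

# The most the employer would pay for an hour of human labor is the value the
# human adds over the AI alternative, which is negative here.
max_hourly_wage = human_option - ai_option                               # ~ -0.31 RM

print(f"staple price:            {staple_price:.3f} RM")
print(f"max wage for the human:  {max_hourly_wage:.3f} RM/hour")
print(f"human's daily food bill: {human_daily_food} RM")
```

So even though the human’s internal trade-off (2 staples per forgone paperclip, versus the AI’s 1-for-1) does give him the comparative advantage in staples, the market-clearing wage for his labor is negative, never mind the 1 RM/day he needs to eat.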
I’m generally a fan of comparative advantage when it comes to typical human situations, but it doesn’t seem applicable in this example. The example must violate some assumptions behind the theory, but I’m not sure what.
The example must violate some assumptions behind the theory, but I’m not sure what.
Possibly because there is a harder limit on humans than on AI? Humans don’t replicate very well.
On second thought, I don’t think comparative advantage holds if demand is exhausted. Comparative advantage (at least the Ricardo version I know of) only concerns the maximum amount of goods that can be produced, not whether they’re actually needed. If there were more demand for paperclips/staples than the AI(s) could produce, humans would focus on staples and the AI(s) (more) on paperclips.
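To put numbers on that with the same toy example (my arithmetic, not the commenter’s): the human can cover his own upkeep only if the staple price rises above his all-in cost, which can only happen when buyers want more staples than the AIs can supply at their cost price.

```python
# Same toy numbers: when does human staple-making pay for his own survival?
HOURS = 24

# Human break-even: a day's output (2 * 24 = 48 staples) must cover 24 RM of
# inputs plus 1 RM of food.
human_breakeven_price = (HOURS + 1) / (2 * HOURS)   # ~0.52 RM per staple

# AI cost price, as before: (24 RM of input + 1 RM upkeep) / 72 staples.
ai_cost_price = (HOURS + 1) / (3 * HOURS)           # ~0.35 RM per staple

# The price can only climb from ~0.35 toward ~0.52 if buyers want more staples
# than the AIs (limited by available RM or copies) can deliver at ~0.35.
print(f"human break-even: {human_breakeven_price:.3f} RM/staple")
print(f"AI cost price:    {ai_cost_price:.3f} RM/staple")
```

In the original example nothing caps how many AIs can be built, so the price never gets that high.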
The example must violate some assumptions behind the theory, but I’m not sure what.
The theory is typically explained using situations where people produce the things they consume. Like, the “human” would literally eat either that 1 paperclip or those 2 staples and survive… and in the future, he could trade the 2 staples for a paperclip and a half, and enjoy the glorious wealth of paperclip-topia.
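(Filling in the arithmetic behind “a paperclip and a half”: any terms of trade between the AI’s internal ratio of 1 staple per paperclip and the human’s ratio of 2 leave both sides better off, e.g.)

$$
1 \;<\; \tfrac{4}{3} \;<\; 2 \quad \text{(staples per paperclip)}, \qquad
\frac{2\ \text{staples}}{\tfrac{4}{3}\ \text{staples per paperclip}} = 1.5\ \text{paperclips},
$$

which beats the 1 paperclip he could have made from that RM himself.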
Also, in the textbook situations the raw materials cannot be traded or taken away. Humans live on one planet, AIs live on another planet, and they only exchange spaceships full of paperclips and staples.
Thus, the theory would apply if each individual human could survive without the trade (e.g. by growing food in their garden) and participated in it only voluntarily. But the current situation is that most people cannot survive on their gardens alone; many of them don’t even have gardens. The resources they actually own are their bodies and their labor, plus some savings, and when their labor becomes uncompetitive and the savings are spent on keeping the body alive...
Consider the comparative advantages of horses. Not sufficient to keep their population at its historical numbers.
You are correct. Free trade in general produces winners and losers, and while on average people become better off, there is no guarantee that individuals will become richer absent some form of redistribution.
In practice, humans can learn new skills and shift jobs, so we mostly ignore the redistribution part, but in the absolute worst case there should be some kind of UBI to compensate the losers of competition with AGI (perhaps paid out of the “future commons” tax).