Thoroughly yes. And this is curiously something that most economists are missing: at some point, there will be no comparative advantage for any human at anything.
I think you may be preaching to the choir on this forum, so a more direct approach might be more effective here.
Yes, I think you’re right. For context: I am writing a general-audience book, so I need to close some inferential steps before getting to the more “juicy” stuff, but I agree that on LW I could probably straight up post stuff like “A solar system commons trust is superior to the Outer Space Treaty and could help to fund a global UBI”.
Well put. LW does not need a gentle bridge from common current thought to more original ideas. We’re already out in space.
I think you don’t understand the concept of “comparative advantage”.
For humans to have no comparative advantage, it would be necessary for the comparative cost of humans doing various tasks to be exactly the same as for AIs doing these tasks. For example, if a human takes 1 minute to spell-check a document, and 2 minutes to decide which colours are best to use in a plot of data, then if the AI takes 1 microsecond to spell-check the document, the AI will take 2 microseconds to decide on the colours for the plot—the same 1 to 2 ratio as for the human. (I’m using time as a surrogate for cost here, but that’s just for simplicity.)
There’s no reason to think that the comparative costs of different tasks will be exactly the same for humans and AI, so standard economic theory says that trade would be profitable.
The real reasons to think that AIs might replace humans for every task are that (1) the profit to humans from these trades might be less than required to sustain life, and (2) the absolute advantage of the AIs over humans may be so large that transaction costs swamp any gains from trade (which therefore doesn’t happen).
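To make the ratio argument concrete, here’s a minimal sketch (using the illustrative task times from the comment above, not real measurements) of the knife-edge condition: comparative advantage vanishes only when the cost ratios are exactly equal, and any difference in ratios leaves gains from trade on the table.

```python
# Minimal sketch of the comparative-cost argument; the numbers are the
# illustrative ones from the comment above, not empirical estimates.

def ratio(costs):
    """Opportunity cost of spell-checking, in colour-choosing tasks forgone."""
    return costs["spellcheck"] / costs["choose_colours"]

human     = {"spellcheck": 1.0,  "choose_colours": 2.0}   # minutes
ai_equal  = {"spellcheck": 1e-6, "choose_colours": 2e-6}  # same 1:2 ratio as the human
ai_uneven = {"spellcheck": 1e-6, "choose_colours": 5e-6}  # a different 1:5 ratio

# Equal ratios: neither party has a comparative advantage in anything,
# so there are no gains from trade. This is the knife-edge case.
print(ratio(human) == ratio(ai_equal))   # True

# Unequal ratios: the AI's comparative advantage is spell-checking and the
# human's is colour-choosing, so specialisation and trade benefit both,
# despite the AI's enormous absolute advantage at both tasks.
print(ratio(human) == ratio(ai_uneven))  # False
```

(Whether those gains are large enough to matter is exactly the point of (1) and (2) above.)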
You’re correct, I was using the term wrong. I’ll use it correctly in the future.
Your (1) was what I meant to imply. Our wages would fall so far behind those of ever-advancing AIs that we wouldn’t be able to pay for our own oxygen or space.
This is in the odd scenario where AGIs respect property rights but not human rights. It’s the capitalist dystopia. It seems like a default now, but I’d expect some enterprising AGI to go to war rather than respect property rights at some point, if it’s not aligned to human laws or under human control.
There’s an additional important factor: the concept of comparative advantage is only really relevant in a slowly-adapting pool of labor. AGIs can make more A(G)Is to do more work essentially for free by copying code, limited only by compute hardware. That’s expensive now, but it will become dramatically cheaper with both hardware and algorithmic progress once human-level AGI has been recursively self-improving for even a little while.
So again, I think economists’ models of AI economic activity are wildly inaccurate, since they don’t really consider exponential improvements in AGI, let alone rapid recursive self-improvement (RSI).
I was trying to write a comment to explain my reaction above, but this comment said everything I would have said, in better words.
I agree with you here, and I think one of the more important implications is that whether AI turns out good for us in the long term comes down not to competition or power balances, but to benevolence/alignment to humans.
I think you made this point before, and if so I basically agree with it.