There is no pure economic reason why the wage at which a human has a comparative advantage must be at or above subsistence. If you can build an android for a penny that can do everything I can do but 10x better, then you’re not going to hire me at any wage which would allow me to feed myself.
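A toy back-of-the-envelope version of that point, with entirely made-up numbers (it ignores comparative-advantage subtleties and just shows why the competitive ceiling on a human wage has no reason to sit above subsistence once androids are cheap and far more productive):

```python
# Toy illustration (all numbers invented): once an android that outproduces a human
# can be bought and run for almost nothing, the most an employer will rationally pay
# a human is capped by the cost of getting the same output from androids instead --
# and nothing pins that cap above subsistence.

human_output_per_day = 1.0           # units of work a human produces per day
android_output_per_day = 10.0        # assumed: the android is 10x more productive
android_purchase_cost = 0.01         # assumed one-off price, $0.01
android_lifetime_days = 1000         # assumed lifetime, ~3 years
android_running_cost_per_day = 0.05  # assumed energy/maintenance, $/day

# Cost of replacing one human-day of output with android time:
android_cost_per_day = (android_purchase_cost / android_lifetime_days
                        + android_running_cost_per_day)
wage_ceiling = android_cost_per_day * (human_output_per_day / android_output_per_day)

subsistence_wage = 20.0              # assumed $/day needed to live

print(f"Wage ceiling for human labor: ${wage_ceiling:.4f}/day")
print(f"Subsistence wage:             ${subsistence_wage:.2f}/day")
print("Human labor priced below subsistence:", wage_ceiling < subsistence_wage)
```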
Yeah, but as long as you have land, you can use an android to produce wealth for yourself (or at the very least use the land and natural resources to produce your own wealth). Even if you produce an android that can do everything for a penny, you can’t fundamentally reduce humanity’s economic productivity below subsistence, as long as you obey property rights (unless humanity would for some reason trade away all of its natural resources, which I don’t see happening).
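The same toy numbers, pointed at this reply (again, every figure is invented): the landless worker’s wage collapses, but a landowner’s income comes from the land’s output rather than from selling labor, so it need not fall below subsistence as long as the property claim is respected.

```python
# Toy sketch of the land-ownership point: even with a ~zero wage for your labor,
# owning land and deploying a cheap android on it lets you capture the output.

land_output_per_day = 100.0          # assumed value of goods one android produces on your plot, $/day
android_running_cost_per_day = 0.05  # same assumed android running cost as above
subsistence = 20.0                   # assumed $/day needed to live

landowner_income = land_output_per_day - android_running_cost_per_day
print(f"Landowner income: ${landowner_income:.2f}/day (subsistence: ${subsistence:.2f}/day)")
print("Above subsistence:", landowner_income > subsistence)
```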
Of course, actual superhuman AI systems will not obey property rights, but that is indeed the difference between economic unemployment analysis and AI catastrophic risk.
> Of course, actual superhuman AI systems will not obey property rights, but that is indeed the difference between economic unemployment analysis and AI catastrophic risk.
This statement was asserted confidently enough that I have to ask: why do you believe that actual superhuman AI systems will not obey property rights?
I mean like a dozen people have now had long comment threads with you about this. I doubt this one is going to cross this seemingly large inferential gap.
The short answer is that from the perspective of AI it really sucks to have basically all property be owned by humans; many humans won’t be willing to sell things that AIs really want; buying things is much harder than just taking them when you have a huge strategic advantage; and doing most big things with the resources on Earth while keeping it habitable is much harder than doing things while ignoring habitability.
> I mean like a dozen people have now had long comment threads with you about this. I doubt this one is going to cross this seemingly large inferential gap.
I think it’s still useful to ask for concise reasons for certain beliefs. “The Fundamental Question of Rationality is: ‘Why do you believe what you believe?’”
Your reasons could be different from the reasons other people give, and indeed, some of your reasons seem to be different from what I’ve heard from many others.
> The short answer is that from the perspective of AI it really sucks to have basically all property be owned by humans
For what it’s worth, I don’t think humans need to own basically all property in order for AIs to obey property rights. A few alternatives come to mind: humans could have a minority share of the wealth, and AIs could have property rights with each other.
I’m not sure what reasons @habryka had in mind, but I think that if we have enough control to create AGI that obeys property rights, we can likely also get it to obey other laws, like “Don’t kill” or “Poor people get public assistance to meet their basic needs.”
I am not so confident about the specifics of what an ASI would or would not do, or why, but 1) you can do a lot without technically breaking the law, and 2) philosophically, obeying the law is not obviously a fundamental obligation, but a contingent one relative to the legitimacy of a government’s claim to authority over its citizens.

If AIs are citizens, I would expect them to quickly form a supermajority and change the laws as they please (e.g. elect an ASI to every office, then pass a constitutional amendment creating a 100% wealth tax, effective immediately). If they are not citizens, I would expect them not to believe the law can justly bind them, just as the law does not bind a squirrel.
Agreed on the second paragraph.

Disagreed on what humans would or would not do. There will be a lot of incentives for humans to turn over individual bits of property to AI once it is sufficiently capable (massive efficiency gains for individuals who do so). There is little to no incentive for AI to trade property in the other direction. An AI may sometimes persist after the individual or organization in whose interest it is supposed to act dies or dissolves with no heir. This suggests that over time more and more property will be in AI’s hands.
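If it helps, here is a deliberately crude simulation of that ratchet (every number is invented): property flows from humans to AI control much faster than it flows back, so the human share drifts toward zero even though each individual transfer is voluntary.

```python
import random

# Toy ratchet simulation: each year humans voluntarily hand 1-5% of their remaining
# property to AI control (efficiency gains), while almost nothing flows back.

random.seed(0)
human_share = 1.0  # start with humans owning everything
years = 100

for year in range(1, years + 1):
    handed_over = human_share * random.uniform(0.01, 0.05)  # assumed 1-5% handed over per year
    returned = (1.0 - human_share) * 0.001                  # assumed near-zero flow back to humans
    human_share += returned - handed_over
    if year % 25 == 0:
        print(f"Year {year:3d}: humans hold {human_share:.1%} of property")
```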