Thanks for responding to that rant!
My concern is that it really seems like humans won’t have even a comparative advantage at anything for very long, because new, more efficient workers can be spun up on demand.
The difference from standard economics is that there isn't a roughly fixed or slowly growing population of workers; workers are being created the way goods are now created. I think this breaks pretty much all existing theories of labor economics (or at least changes their conclusions dramatically). Worse, those workers are nearly free to duplicate, requiring only the compute to run them.
With a little specialization, every task will have a set of AIs designed for it, or at least ones that "want" to do it and can do it more efficiently than any human. That would eliminate any comparative advantage for humans in any work other than human-specific entertainment, should such demand exist.
You comment that they may be constrained by compute or power. That seems like a poor place to pin long-term hopes. They will be constrained for a while, but the energy needed to do more computation than the human brain is not very large if compute efficiency keeps improving, which of course they'd want to make happen quickly.
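To make that energy point concrete, here is a minimal back-of-envelope sketch in Python. The numbers are my own rough assumptions, not figures from your post: roughly 20 W for the brain's power draw, a deliberately middle-of-the-road guess of 1e15 FLOP/s for its effective compute (published estimates span several orders of magnitude), and roughly 1e15 low-precision FLOP/s at about 700 W for a current datacenter accelerator.

```python
# Back-of-envelope: energy efficiency of brain-scale computation.
# All figures are rough assumptions, not authoritative numbers:
#   - human brain power draw: ~20 W (commonly cited estimate)
#   - brain "effective compute": assumed 1e15 FLOP/s; estimates vary by
#     orders of magnitude, so treat this as illustrative only
#   - modern accelerator: ~1e15 FLOP/s (low precision) at ~700 W

brain_watts = 20.0
brain_flops = 1e15            # assumed effective FLOP/s; highly uncertain
gpu_flops = 1e15              # approximate low-precision throughput
gpu_watts = 700.0

brain_efficiency = brain_flops / brain_watts   # FLOP per joule
gpu_efficiency = gpu_flops / gpu_watts         # FLOP per joule

print(f"brain: {brain_efficiency:.1e} FLOP/J")
print(f"gpu:   {gpu_efficiency:.1e} FLOP/J")
print(f"gap:   ~{brain_efficiency / gpu_efficiency:.0f}x")
# Under these assumptions the gap is only ~35x, so modest further gains in
# compute efficiency make brain-scale computation cheap in energy terms.
```

Under those assumed figures the hardware is already within a couple of orders of magnitude of brain efficiency, which is why compute and power look like weak long-term constraints.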
So guaranteeing every AGI the legal right to work and own property doesn't seem like a good idea. It looks like an attractive short-term solution that very likely ends with humanity outcompeted and dead (except to the extent the AGIs are charitable toward us, which is the point of alignment).
But I think the core of your proposal can be retained while avoiding those nasty long-term consequences. We can agree to grant a right to life and to own property to the first AGIs without extending that right indefinitely if they keep spinning out more. That should help put them on our side both on alignment and on nonproliferation of RSI-capable AGI. And we might cap the wealth they can own, or add some other clause to keep them from creating non-sapient subagents, so that no single entity can do an effectively unlimited amount of work more efficiently than humans can and wind up owning everything.
The other problem is that we're expecting those AGIs to honor our property rights even once they've reached a position where they don't have to, where they could safely take over if they wanted. They'll honor the agreement if they're aligned, but otherwise it seems like they won't. So you might prevent the first non-aligned AGI from taking over immediately, only to give it time to gather resources that make that takeover more certain. That might buy time to get an aligned AGI into the picture, but probably not enough, given the exponential nature of RSI progress.
So the above scenarios of economic takeover really only apply if there's either alignment that makes them want to honor agreements (and if you can do that, why not align them to enjoy helping humans?) or a balance of power like the one among humans, under which even sociopaths mostly participate in the economy and obey the law. That logic doesn't apply to an entity that can copy itself and make itself smarter: once it acquires enough resources, it doesn't need anyone's cooperation to achieve its goals, unlike humans, who are each limited in physical and intellectual capacity and so need collaborators to achieve lofty goals.
So I’m afraid this proposal doesn’t really offer much help with the alignment problem.