Even if you have long term preferences, bold of you to assume that these preferences will stay stable in a world with AIs. I expect an AI, being smarter than a human, can just talk you into signing away the stuff you care about. It’ll be like money-naive people vs loan sharks, times 1000.
I think this is just a special case of more direct harms/theft? Like, imagine that some humans developed the ability to mind control others; this could probably be handled via the normal system of laws, etc. The situation gets more confusing as AIs become more continuous with the mundane persuasion we currently allow in our society. But I still think you can build a broadly liberal society which handles super-persuasion.
If nothing else, I expect mildly-superhuman sales and advertising will be enough to ensure that the human share of the universe will decrease over time. And I expect the laws will continue being at least mildly influenced by deep pockets, to keep at least some such avenues possible. If you imagine a hard lock on these and other such things, well that seems unrealistic to me.
I’m just trying to claim that this is possible in principle. I’m not particularly trying to argue this is realistic.
I’m just trying to argue something like: “If we gave out property rights to the entire universe, backchained from ensuring the reasonable enforcement of these property rights, and actually did a good job on enforcement, things would be fine.”
This implicitly requires handling violations of property rights (roughly speaking) like:
War/coups/revolution/conquest
Super-persuasion and more mundane concerns of influence
I don’t know how to scalably handle AI revolution without ensuring a property basically as strong as alignment, but that seems orthogonal.
We also want to handle “AI monopolies” and “insufficient AI competition resulting in deadweight loss (or even just AIs eating more of the surplus than is necessary)”. But we can, at least in theory, backchain from handling this to determine what interventions are needed in practice.