If you imagine a hard lock on these and other such things, well, that seems unrealistic to me.
I’m just trying to claim that this is possible in principle. I’m not particularly trying to argue this is realistic.
I’m just trying to argue something like “If we gave out property rights to the entire universe, backchained from ensuring reasonable enforcement of those rights, and actually did a good job on enforcement, things would be fine.”
This implicitly requires handling violations of property rights (roughly speaking) like:
War/coups/revolution/conquest
Super-persuasion and more mundane concerns of influence
I don’t know how to scalably handle AI revolution without ensuring a property basically as strong as alignment, but that seems orthogonal.
We also want to handle “AI monopolies” and “insufficient AI competition resulting in deadweight loss (or even just AIs eating more of the surplus than is necessary)”. But we can, at least in theory, backchain from handling this to what interventions are needed in practice.