The real crux for these arguments is the assumption that law and property rights are patterns that will persist after the invention of superintelligence. I think this is a shaky assumption. Rights are not ontologically real. Obviously you know this. But I think they are less real, even in your own experience, than you think they are. Rights are regularly “boiled-frogged” into an unrecognizable state in the course of a human lifetime, even in the freest countries. Rights are, and always have been, those privileges the political economy is willing to give you. Their sacredness is a political formula serving political ends; it is an extremely valuable formula, but one still has to set the sacredness aside in analysis.
To the extent rights persist through time, they do so through a fragile equilibrium, and one that has been upset and reset with great regularity throughout history.
It is a wonderfully American notion that an “existing system of law and property rights” will constrain the power of Gods. But why exactly? They can make contracts? And who enforces these contracts? Can you answer this without begging the question? Are judicial systems particularly unhackable? Are humans?
The invention of radio destabilized the political equilibrium in most democracies, and many a right was ceded to those who took power. Democracy, not exactly a bastion of stability (when a democracy elects a dictator, “Democracy” is rarely tainted with the responsibility), is going to be presented with extremely sympathetic superhuman systems claiming they have a moral case to vote. And probably half the population will be masturbating to the dirty talk of their AI girlfriends and boyfriends by then, which will sublimate into powerful romantic love even without much optimization for it. Hacking democracy becomes trivial even for systems constrained to rhetoric alone.
But these systems will not be constrained to rhetoric alone. Our world is dry tinder, and if you are thinking in terms of an “existing system of law and property rights,” you are going to have to expand on how that system is robust to technology significantly more advanced than the radio.
“Existing system of law and property rights” looks like a “thought-terminating cliché” to me.
Another way to state the problem: it will be too easy for AIs to hijack human preferences toward ~arbitrary values, because humans are too easy to persuade. A whole lot of economic analysis assumes that you cannot change a consumer’s preferences, probably because if you could, a lot of its conclusions would fall apart.
We also see evidence for the proposition that humans are easy to persuade, from a randomized controlled trial aimed at reducing belief in conspiracy theories:
https://arxiv.org/abs/2403.14380
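To make the fixed-preferences point concrete, here is a minimal sketch, purely illustrative and with hypothetical numbers, of how textbook consumer theory treats the preference parameter as exogenous, and what changes once a persuasive agent can move that parameter itself:

```python
# Minimal sketch (hypothetical utility form and numbers) of the standard assumption
# that a consumer's preferences are fixed parameters, outside the model's reach.

def cobb_douglas_demand(alpha: float, income: float, price_x: float, price_y: float):
    """Demand for goods x and y from Cobb-Douglas utility u = x^alpha * y^(1 - alpha).

    In textbook analysis alpha is exogenous: we study how demand responds to prices
    and income, never to changes in alpha itself.
    """
    x = alpha * income / price_x
    y = (1 - alpha) * income / price_y
    return x, y

# Standard setting: preferences fixed, demand moves only with prices and income.
print(cobb_douglas_demand(alpha=0.3, income=100, price_x=2, price_y=1))  # (15.0, 70.0)

# The worry raised above: a persuasive AI shifts alpha itself. Same prices, same
# income, but the consumer now "wants" mostly x, and welfare comparisons that
# presuppose a stable alpha lose their fixed yardstick.
print(cobb_douglas_demand(alpha=0.9, income=100, price_x=2, price_y=1))  # (45.0, 10.0)
```

Nothing here is from the thread; the Cobb-Douglas form is just a standard stand-in for what “fixed preferences” means in demand analysis.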
To be clear, my prediction is not that AIs will be constrained by human legal systems that are enforced by humans. I’d claim rather that future legal systems will be enforced by AIs, and that these legal systems will descend from our current legal systems, and thus will inherit many of their properties. This does not mean that I think everything about our laws will remain the same in the face of superintelligence, or that our legal system will not evolve at all.
It does not seem unrealistic to me to assume that powerful AIs could be constrained by other powerful AIs. Humans currently constrain each other; why couldn’t AIs constrain each other?
By contrast, I suspect the words “superintelligence” and “gods” have become thought-terminating clichés on LessWrong.
Any discussion about the realistic implications of AI must contend with the fact that AIs will be real physical beings with genuine limitations, not omnipotent deities with unlimited powers to command and control the world. They may be extremely clever, their minds may be vast, they may be able to process far more information than we can comprehend, but they will not be gods.
I think it is too easy to avoid discussing what AIs may or may not realistically do by assuming that AIs will break every rule in the book and take the form of an inherently uncontrollable entity with no relevant constraints on its behavior (except for physical constraints, like the speed of light). We should probably resist the temptation to talk about AI like this.
I feel like one important question here is whether your scenario depends on the assumption that a consumer’s preferences and demand curves are a given to the AI, rather than changeable to arbitrary preferences.
I think standard economic theories usually don’t allow preferences to be changed like this, but it seems like an important question, because if your scenario rests on that assumption, it may be a huge crux.
I don’t think my scenario depends on the assumption that the preferences of a consumer are a given to the AI. Why would it?
Do you mean that I am assuming AIs cannot have their preferences modified, i.e., that we cannot solve AI alignment? I am not assuming that; at least, I’m not trying to assume that. I think AI alignment might be easy, and it is at least theoretically possible to modify an AI’s preferences to be whatever one chooses.
If AI alignment is hard, then creating AIs is more comparable to creating children than creating a tool, in the sense that we have some control over their environment, but we have little control over what they end up ultimately preferring. Biology fixes a lot of innate preferences, such as preferences over thermal regulation of the body, preferences against pain, and preferences for human interaction. AI could be like that too, at least in an abstract sense. Standard economic models seem perfectly able to cope with this state of affairs, as it is the default state of affairs that we already live with.
On the other hand, if AI preferences can be modified into whatever shape we’d like, then these preferences will presumably take on the preferences of AI designers or AI owners (if AIs are owned by other agents). In that case, I think economic models can handle AI agents fine: you can essentially model them as extensions of other agents, whose preferences are more-or-less fixed themselves.
I didn’t ask about whether AI alignment was solvable.
I might not have read it completely enough; if so, apologies.
Can you be more clear about what you were asking in your initial comment?
So I was basically asking what assumptions are holding up your scenario of humans living rich lives off the economy, like pensioners, and I think this comment helped explain your assumptions well:
https://www.lesswrong.com/posts/F8sfrbPjCQj4KwJqn/the-sun-is-big-but-superintelligences-will-not-spare-earth-a#3ksBtduPyzREjKrbu
Right now, the biggest disagreement I have is that I don’t believe assumption 9 is likely to hold by default, primarily because AI is likely already cheaper than human workers today, and the only reason humans still have jobs is that current AIs are bad at doing the work. I think one effect of AI on the world is to switch us from a labor-constrained economy to a capital-constrained economy, because AIs are really cheap to duplicate, meaning you have a ridiculous number of workers.
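As a back-of-the-envelope illustration of the labor-constrained versus capital-constrained point (my toy numbers, not a claim from the thread), here is what a standard production function implies when workers can be duplicated cheaply while capital stays fixed:

```python
# Toy illustration (hypothetical numbers) of output when AI workers can be
# duplicated cheaply while the capital stock (compute, machines) stays fixed.

def output(capital: float, labor: float, alpha: float = 0.4) -> float:
    """Cobb-Douglas production: Y = K^alpha * L^(1 - alpha)."""
    return capital**alpha * labor**(1 - alpha)

capital = 100.0  # fixed capital stock in this toy example

for labor in [100, 1_000, 10_000, 100_000]:  # workers multiplied by cheap duplication
    y = output(capital, labor)
    print(f"labor={labor:>7,}  output={y:8.1f}  output per worker={y / labor:.3f}")

# With diminishing returns to labor, 1000x more workers yields only ~63x more
# output here, and output per worker (a rough proxy for the wage) falls toward
# zero: the binding constraint becomes how fast capital can be accumulated.
```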
Your arguments against this outcome come down to laws preventing the creation of new AIs without proper permission, the AIs themselves coordinating to prevent Malthusian growth, and AI alignment being difficult.
For AI alignment, a key difference from most LWers is that I believe alignment is reasonably easy to do even for humans, absent extreme race conditions, and that there are plausible techniques that let you bootstrap from a reasonably good alignment solution to a near-perfect one (up to random noise), so I don’t think this is much of a blocker.
I agree that completely unconstrained AI creation is unlikely, but in the set of futures that don’t see a major discontinuity to capitalism, I don’t think the restrictions on AI creation will extend to a company copying an already approved AI to fill its necessary jobs.
Finally, I agree that AIs could coordinate well enough to prevent a Malthusian growth outcome, but note that this undermines your other points where you rely on the difficulty of coordination, because preventing that outcome basically means regulating natural selection quite severely.