If humanity was indefinitely competitive in most tasks, the AIs would want to trade with us or enslave us instead of murdering us or letting us starve to death.
This sentence (in the context of the broader post) seems to assume that “being competitive in most tasks” and “technological unemployment” are the same. However, they very importantly are not. In general, because of comparative advantage dynamics (i.e., situations where, even if one party totally dominates on productivity across all tasks, there is still opportunity for trade), I don’t think there is a pure economic case that technological unemployment would be correlated with lack of competitiveness compared to AI.
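(To make the comparative-advantage point concrete, here is a toy numeric sketch with invented numbers — the standard two-goods textbook setup, not a model of AI economics: even when A out-produces B at everything, there is a trade that leaves both strictly better off.)

```python
# Toy comparative-advantage example (invented numbers, standard textbook setup).
# A out-produces B at *both* goods, yet a trade exists that leaves both strictly better off.

HOURS = 8.0  # hours each agent works per day

# Output per hour of work:
A_FOOD, A_WIDGETS = 10.0, 10.0  # A's opportunity cost: 1 food per widget
B_FOOD, B_WIDGETS = 2.0, 1.0    # B's opportunity cost: 2 food per widget

# Autarky: each agent splits its time evenly and consumes only what it makes.
a_autarky = (A_FOOD * HOURS / 2, A_WIDGETS * HOURS / 2)  # (40 food, 40 widgets)
b_autarky = (B_FOOD * HOURS / 2, B_WIDGETS * HOURS / 2)  # (8 food, 4 widgets)

# With trade: A tilts toward widgets (its comparative advantage), B specializes in food,
# and they swap 4 widgets for 6 food (a price of 1.5 food/widget, between the two
# opportunity costs of 1 and 2).
a_trade = (A_FOOD * 3.6 + 6, A_WIDGETS * 4.4 - 4)  # (42 food, 40 widgets)
b_trade = (B_FOOD * HOURS - 6, 4.0)                # (10 food, 4 widgets)

assert a_trade[0] > a_autarky[0] and a_trade[1] >= a_autarky[1]
assert b_trade[0] > b_autarky[0] and b_trade[1] >= b_autarky[1]
print("A:", a_autarky, "->", a_trade)
print("B:", b_autarky, "->", b_trade)
```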
And so I don’t really think that existential risk is caused by “unemployment”. People are indeed confused about the nature of comparative advantage, and mistakenly assume that lack of competitiveness will lead to loss of jobs, which will then be bad for them.
But the actual risk comes from adversarial dynamics where we don’t want to hand over the future of the universe to the AIs, but the AIs sure would like to have it. And if humanity could coordinate better, it would be able to just wait a few decades and seize the future, but it probably won’t do that.
It’s not like I can’t imagine at all calling what is going on here an “unemployment” issue, but I feel like in that case I would need to also call violent revolutions or wars “unemployment” issues. After all, if I could just engage in economic trade with my enemies, we wouldn’t need to fight; but clearly my enemies want to take the stuff I already have and want to use the land that I “own” for their own purposes. AIs will face the same choice, and that really seems quite different from “unemployment”.
People are also confused about the meaning of words like “unemployment” and how and why it can be good or bad. If being unemployed merely means not having a job (i.e., not being counted in the labor force participation rate), then plenty of people are unemployed by choice, well off, happy, and doing well. These are called retired people.
One way labor force participation can be high is if everyone is starving and needs to work all day in order to survive. Another way labor force participation can be high is if it’s extremely satisfying to maintain a job and there are tons of benefits that go along with being employed. My point is that it is impossible to conclude whether a change is “bad” or “good” if all you know is that this statistic went up or down. To determine whether changes to this variable are bad, you need to understand more about the context in which the variable is changing.
To put this more plainly, the idea that machines will take our jobs generally means one of two things. Either it means that machines will push down overall human wages and make humans less competitive across a variety of tasks. This is directly related to x-risk concerns because it is a direct effect of AIs becoming more numerous and more productive than humans. It makes sense to be concerned about this, but it’s imprecise to describe it as “unemployment”: the problem is not that people are unemployed, the problem is that people are getting poorer.
Or it means that machines will increase our total prosperity, allowing us to spend more time in pleasant leisure and less time in unpleasant work. This would probably be a good thing, and it’s important to strongly distinguish it from the idea that wages will fall.
Doesn’t comparative advantage assume a fixed trader pool and unlimited survival? If you’ve got two agents A and B, and A has an absolute advantage over B, then if A can scale and resources are limited, A would just buy up whatever resources it needs to survive (presumably pricing B out of the market) and then use its greater scale to perform both its original work and B’s original work.
At least under standard microeconomic assumptions of property ownership, you would presumably still have positive productivity of your capital (like your land).
In general I don’t see why B would sell the resources it needs to survive (and it’s not that hard to have enough resources to be self-sufficient). The purchasing power of those resources in a resource-limited context would also now be much greater, since producing things is so much cheaper.
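(A minimal sketch of that purchasing-power point, with invented numbers; the only thing it assumes is that B keeps owning its land.)

```python
# Minimal sketch: B's labor becomes worthless, but B's land, worked by cheap androids,
# yields far more than B's own labor ever did. All numbers are invented.

SUBSISTENCE = 5.0  # units of goods B needs per year to survive

# Before AI: B works its own land.
pre_ai_land_yield = 10.0                            # units of goods per year from B's labor
pre_ai_surplus = pre_ai_land_yield - SUBSISTENCE    # 5 units to spare

# After AI: androids work the land far more productively, for a small rental fee.
post_ai_land_yield = 100.0                          # units of goods per year with androids
android_rental_fee = 2.0                            # paid in units of goods per year
post_ai_surplus = post_ai_land_yield - android_rental_fee - SUBSISTENCE  # 93 units to spare

assert post_ai_surplus > pre_ai_surplus
print(f"Surplus over subsistence: {pre_ai_surplus} -> {post_ai_surplus}")
# The whole argument rests on B continuing to own the land, i.e. on property rights holding.
```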
The problem is of course that at some point A will just take B’s stuff without buying it from them, and then I think “unemployment” isn’t really the right abstraction anymore.
I guess a key question is, where does the notion of property ownership derive from? If we just take it for granted that sovereign nation-states exist and want to enforce it well enough (including preventing A from defrauding B, in a sufficiently value-aligned sense which we can’t currently define), then I suppose your logic looks plausible.
To some extent this is just going to derive from inertia, but in order to keep up with AI criminals, I imagine law enforcement will depend on AI too. So “property rights” at the very least requires solving the alignment problem for law enforcement’s AIs.
If people are still economically competitive, then this is not an existential risk because the AI would want to hire or enslave us to perform work, thereby allowing us to survive without property.
And if people are still economically competitive, law enforcement would probably be much less difficult? Like I’m not an expert, but it seems to me that to some extent democracy derives from the fact that countries have to recruit their own citizens for defense and law enforcement. Idk, that kind of mixes together ability in adversarial contexts with ability in cooperative contexts, in a way that is maybe suboptimal.
At least the way I think of it is that if you are an independent wellspring of value, you aren’t relying on inertia or external supporters in order to survive. This seems like a more-fundamental thing that our economic system is built on top of.
I agree with you that “economically competitive” under some assumptions would imply that AI doesn’t kill us, but indeed my whole point is that “economic competitiveness” and “concerns about unemployment” are only loosely related.
I think long-term economic competitiveness with a runaway self-improving AI is extremely unlikely. I have no idea whether that will cause unemployment before it results in everyone dying for reasons that don’t have much to do with unemployment.
Well, we’re not talking about microeconomics, are we? Unemployment is a macroeconomic phenomenon, and we are precisely talking about people who have little to no capital, need to work to live, and therefore need their labor to have economic value to live.
No, we are talking about what the cause of existential risk is, which is not limited to people who have little to no capital, need to work to live, and need their labor to have economic value to live. For something to be an existential risk you need basically everyone to die or be otherwise disempowered. Indeed, my whole point is that the dynamics of unemployment are very different from the dynamics of existential risk.
Wiping out 99% of the world population is a global catastrophic risk, and likely a value drift risk and s-risk.
Talking about 99% of the population dying similarly requires talking about people who have capital. I don’t really see the relevance of this comment?
The bottom 55% of the world population own ~1% of capital, the bottom 88% own ~15%, and the bottom 99% own ~54%, which is a majority. But the top 1% are the millionaires (not even multi-millionaires or billionaires), likely owning wealth more vitally important to the economy than personal property and bank accounts, and empirically they seem to be doing fine dominating the economy already, without neoclassical catechism about comparative advantage preventing them from doing that. However you massage the data, it seems highly implausible that driving the value of labor (the non-capital factor of production) to zero wouldn’t be a global catastrophic risk and a value drift risk/s-risk.
It appears to me you are still trying to talk about something basically completely different from the rest of this thread. Nobody is talking about whether driving the value of labor to zero would be a catastrophic risk; I am saying it’s not an existential risk.
There is no pure economic reason why the wage at which a human has a comparative advantage must be at or above subsistence. If you can build an android for a penny that can do everything I can do but 10x better, then you’re not going to hire me at any wage which would allow me to feed myself.
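(A minimal sketch of this arithmetic, with invented numbers.)

```python
# Minimal sketch, with invented numbers, of why a comparative-advantage wage need not
# reach subsistence for someone who owns no capital.

android_cost_per_task = 0.01     # $ to have an android do one task
human_tasks_per_day = 20         # tasks a human can do in a day (the android does them better)
subsistence_cost_per_day = 20.0  # $ a human needs per day to live

# No employer will pay a human more than the androids that could replace them would cost,
# so this is an upper bound on the human's daily wage:
wage_ceiling = android_cost_per_task * human_tasks_per_day  # $0.20/day

assert wage_ceiling < subsistence_cost_per_day
print(f"Wage ceiling ${wage_ceiling:.2f}/day vs subsistence ${subsistence_cost_per_day:.2f}/day")
# Trade at $0.20/day can still be "mutually beneficial" in the textbook sense;
# it just doesn't keep the human fed.
```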
Yeah, but as long as you have land, you can use an android to produce wealth for yourself (or at the very least use the land and natural resources to produce your own wealth). Even if you produce an android that can do everything for a penny, you can’t fundamentally reduce humanity’s economic productivity below subsistence, as long as you obey property rights (unless humanity would for some reason trade away all of its natural resources, which I don’t see happening).
Of course, actual superhuman AI systems will not obey property rights, but that is indeed the difference between economic unemployment analysis and AI catastrophic risk.
This statement was asserted confidently enough that I have to ask: why do you believe that actual superhuman AI systems will not obey property rights?
I mean like a dozen people have now had long comment threads with you about this. I doubt this one is going to cross this seemingly large inferential gap.
The short answer is that, from the perspective of the AI, it really sucks to have basically all property be owned by humans; many humans won’t be willing to sell things that AIs really want; buying things is much harder than just taking them when you have a huge strategic advantage; and doing most big things with the resources on Earth while keeping it habitable is much harder than doing things while ignoring habitability.
I think it’s still useful to ask for concise reasons for certain beliefs. “The Fundamental Question of Rationality is: ‘Why do you believe what you believe?’”
Your reasons could be different from the reasons other people give, and indeed, some of your reasons seem to be different from what I’ve heard from many others.
For what it’s worth, I don’t think humans need to own basically all property in order for AIs to obey property rights. A few alternatives come to mind: humans could have a minority share of the wealth, and AIs could have property rights with each other.
I’m not sure what reasons @habryka had in mind, but I think if we had enough control to create AGI that obeys property rights, we’re likely also able to get it to obey other laws, like “Don’t kill” or “Poor people get public assistance to meet their basic needs.”
I am not so confident about the specifics of what an ASI would or would not do, or why, but 1) you can do a lot without technically breaking the law, and 2) philosophically, obeying the law is not obviously a fundamental obligation, but a contingent one relative to the legitimacy of a government’s claim to authority over its citizens. If AIs are citizens, I would expect them to quickly form a supermajority and change the laws as they please (e.g. elect an ASI to every office, then pass a constitutional amendment creating a 100% wealth tax, effective immediately). If they are not citizens, I would expect them to not believe the law can justly bind them, just as the law does not bind a squirrel.
Agreed on the second paragraph.
Disagreed on what humans would or would not do. There will be a lot of incentives for humans to turn over individual bits of property to AI once it is sufficiently capable (massive efficiency gains for individuals who do so). There is little to no incentive for AI to trade property in the other direction. An AI may sometimes persist after the individual or organization in whose interest it is supposed to act dies or dissolves with no heir. This suggests that over time more and more property will be in AI’s hands.