At least under standard microeconomic assumptions of property ownership, you would presumably still have positive productivity of your capital (like your land).
In general I don’t see why B would sell the resources it needs to survive (and it’s not that hard to have enough resources to be self-sufficient). The purchasing power of those resources in a resource-limited context would also now be much greater, since producing things is so much cheaper.
The problem, of course, is that at some point A will just take B’s stuff without buying it from them, and then I think “unemployment” isn’t really the right abstraction anymore.
I guess a key question is, where does the notion of property ownership derive from? If we just take it for granted that sovereign nation-states exist and want to enforce it well enough (including preventing A from defrauding B, in a sufficiently value-aligned sense which we can’t currently define), then I suppose your logic looks plausible.
To some extent this is just going to derive from inertia, but in order to keep up with AI criminals, I imagine law enforcement will depend on AI too. So “property rights” at the very least requires solving the alignment problem for law enforcement’s AIs.
If people are still economically competitive, then this is not an existential risk because the AI would want to hire or enslave us to perform work, thereby allowing us to survive without property.
And if people are still economically competitive, law enforcement would probably be much less difficult? Like I’m not an expert, but it seems to me that to some extent democracy derives from the fact that countries have to recruit their own citizens for defense and law enforcement. Idk, that kind of mixes together ability in adversarial contexts with ability in cooperative contexts, in a way that is maybe suboptimal.
At least the way I think of it is that if you are an independent wellspring of value, you aren’t relying on inertia or external supporters in order to survive. This seems like a more fundamental thing that our economic system is built on top of.
I agree with you that “economically competitive” under some assumptions would imply that AI doesn’t kill us, but indeed my whole point is that “economic competitiveness” and “concerns about unemployment” are only loosely related.
I think long-term economic competitiveness with a runaway self-improving AI is extremely unlikely. I have no idea whether that will cause unemployment before it results in everyone dying for reasons that don’t have much to do with unemployment.
At least under standard microeconomic assumptions of property ownership, you would presumably still have positive productivity of your capital (like your land).
Well, we’re not talking about microeconomics, are we? Unemployment is a macroeconomic phenomenon, and we are precisely talking about people who have little to no capital, need to work to live, and therefore need their labor to have economic value to live.
No, we are talking about what the cause of existential risk is, which is not limited to people who have little to no capital, need to work to live, and need their labor to have economic value to live. For something to be an existential risk you need basically everyone to die or be otherwise disempowered. Indeed, my whole point is that the dynamics of unemployment are very different from the dynamics of existential risk.
The bottom 55% of the world population own ~1% of capital, the bottom 88% own ~15%, and the bottom 99% own ~54%. That last figure is a majority, but the top 1% here are mere millionaires (not even multi-millionaires or billionaires), likely owning wealth more vitally important to the economy than personal property and bank accounts, and empirically they already seem to be doing fine dominating the economy, without any neoclassical catechism about comparative advantage preventing them from doing so. However you massage the data, it seems highly implausible that driving the value of labor (the non-capital factor of production) to zero wouldn’t be a global catastrophic risk and a value drift risk/s-risk.
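For concreteness, here is a quick sketch (plain Python, using only the approximate figures quoted above) of the complementary shares those numbers imply for the top of the distribution:

```python
# Approximate cumulative wealth shares quoted above:
# bottom 55% of people own ~1%, bottom 88% own ~15%, bottom 99% own ~54%.
bottom_shares = {0.55: 0.01, 0.88: 0.15, 0.99: 0.54}

# The complement of each pair gives the share held by the corresponding top slice.
for pop, wealth in bottom_shares.items():
    print(f"top {1 - pop:.0%} of the population own ~{1 - wealth:.0%} of wealth")
# → top 45% own ~99%, top 12% own ~85%, top 1% own ~46%
```

So on these figures the top 1% alone hold roughly 46% of wealth, which is the point being made about who dominates the capital side of the economy.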
It appears to me you are still trying to talk about something basically completely different than the rest of this thread. Nobody is debating whether driving the value of labor to zero would be a catastrophic risk; I am saying it’s not an existential risk.
Wiping out 99% of the world population is a global catastrophic risk, and likely a value drift risk and s-risk.

Talking about 99% of the population dying similarly requires talking about people who have capital. I don’t really see the relevance of this comment?