Why worry about what happens under a capitalist system, when very powerful AI that didn’t like the outcomes of such a system would probably just remove it entirely, or reconstitute it in a way that didn’t recognize human property rights to begin with? I mean, the existing system doesn’t recognize AI property rights at all, and you seem to assume that the AI would have the leverage to change that.
For that matter, even if you had a post-AGI or post-ASI system where humans owned all of the capital, it’s almost certain that that ownership would be distributed in a way that most humans would feel was grossly unfair. So the humans would also be trying to change the system.
The elimination of capitalism seems like it’s very much on the minor end of the possible range of post-ASI changes.
Very powerful AIs may very well be created in order to defend the current capitalist system.
Like, the most plausible proposal for what distinguishes bounded tool AI from dangerous AI is that dangerous AI does adversarial/minimax-like reasoning, whereas bounded tool AI mostly just assumes the world will allow it to do whatever it tries, so it doesn't need to try very hard.
This means the main people who will be forced to create dangerous AI are the ones working in hardcore adversarial contexts, which especially means the military and police (as well as their opponents, including rogue states and gangsters). But the primary goal of the military and police is to maintain the current system.
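To make that distinction concrete, here's a toy sketch (my own illustration, with made-up actions and payoffs, not anything from a real system): one planner evaluates actions as if the environment will simply go along with them, while the other does minimax-style reasoning and assumes an opponent will pick the worst-case response.

```python
# Toy illustration with made-up payoffs: contrast a planner that assumes the
# world cooperates with one that does minimax-style worst-case reasoning.

# outcomes[action][response] = payoff to the agent if it takes `action`
# and the environment/opponent responds with `response`.
outcomes = {
    "bold_plan":     {"world_cooperates": 10, "world_pushes_back": -50},
    "cautious_plan": {"world_cooperates":  3, "world_pushes_back":   1},
}

def tool_like_choice(outcomes):
    """Pick the action with the best payoff, assuming the most favorable response."""
    return max(outcomes, key=lambda a: max(outcomes[a].values()))

def adversarial_choice(outcomes):
    """Pick the action with the best worst-case payoff (minimax over responses)."""
    return max(outcomes, key=lambda a: min(outcomes[a].values()))

print(tool_like_choice(outcomes))    # -> bold_plan: it just assumes cooperation
print(adversarial_choice(outcomes))  # -> cautious_plan: it plans around opposition
```

The point is just the shape of the decision rule: the second planner has to model opposition at all, which is the kind of reasoning you'd expect to get built for military, policing, and other adversarial contexts.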