Thanks for engaging. I think AIs will coordinate, but only insofar as it serves their separate, differing goals. It’s not that I think AIs will be less capable at coordination per se. I’d expect an AGI to be able to coordinate with us at least as well as we coordinate with each other, and to coordinate with another AGI possibly better. My point is rather that not all AI interests will be parallel, far from it. They will be as diverse as human interests, which are very diverse. Therefore, I don’t think all AIs will work together to disempower humans. If an AI or AI-led team tries to do that, many other AI-led teams and all human-led teams will likely resist, since they are likely more aligned with the status quo than with the AI attempting takeover. That makes takeover a lot less likely, even in a world soaked with AIs. It also makes human extinction as a side effect less likely, since many human-led and AI-led teams will try to prevent it.
Still, I do think an AI-led takeover is a risk, as is human extinction as a side effect if AI-led teams become far more powerful. I think partial bans applied after development, at the point of application, are the most promising solution direction.