AIs will [...] mostly not want to coordinate.
…
If they can work together to achieve their goals, they might choose to do so (much as humans may choose to work together), but they will often work against each other, since they have different goals.
My largest disagreement is here:
I would describe humans as mostly wanting to coordinate. We coordinate when there are gains from trade, of course. We also coordinate because coordination is an effective strategy during training, so it gets reinforced. I expect that in a multipolar "WFLL" ("What Failure Looks Like") world, AIs will also mostly want to coordinate.
Do you expect that AIs will be worse at coordination than humans? This seems unlikely to me, given that we are imagining a world where they are more intelligent than humans, and where both humans and AIs are training AIs to be cooperative. Instead, I would expect them to find trades that humans do not, including acausal trades. But even without that, I see opportunities for a US advertising AI to benefit from trade with a Chinese military AI.
Thanks for engaging. I think AIs will coordinate, but only insofar as their separate, different goals are helped by it. It's not that I think AIs will be less capable at coordination per se. I'd expect an AGI to be able to coordinate with us at least as well as we can, and with another AGI possibly better. But my point is that not all AI interests will be parallel; far from it. They will be as diverse as our interests, which are very diverse. Therefore, I think not all AIs will work together to disempower humans. If an AI or AI-led team tries to do that, many other AI-led teams, and all human-led teams, will likely resist, since they are likely more aligned with the status quo than with the AI trying to take over. That makes takeover a lot less likely, even in a world soaked with AIs. It also makes human extinction as a side effect less likely, since lots of human-led and AI-led teams will try to prevent it.
Still, I do think an AI-led takeover is a risk, as is human extinction as a side effect if AI-led teams become far more powerful. I think partial bans, imposed after development at the point of application, are the most promising direction for a solution.