Have you considered the specific mechanism that I proposed, and if so what do you find implausible about it? (If not, see this longer post or this shorter comment.)
I did manage to find a quote from you that perhaps explains most of our disagreement on this specific mechanism:
There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination.
Can you elaborate on what these other factors are? It seems to me that most coordination costs in the real world come from value differences, so it’s puzzling to see you write this.
Abstracting away from the specific mechanism, as a more general argument, AI designers or evolution will (sooner or later) be able to explore a much larger region of mind design space than biological evolution could. Within this region there are bound to be minds much better at coordination than humans, and we should certainly expect coordination ability to be one objective that AI designers or evolution will optimize for since it offers a significant competitive advantage.
This doesn’t guarantee that the designs that end up “winning” will have much better coordination ability than humans: the designers or evolution might be forced to trade off coordination ability against something else they value, to the extent that the “winners” don’t coordinate much better than humans do. But that doesn’t seem like something we should expect to happen by default, without some specific reason for it, and it becomes less and less likely as more and more of mind design space is explored.