But… shouldn’t this mean you expect AGI civilization to totally dominate human civilization? They can read each other’s source code, and thus trust much more deeply! They can transmit information between them at immense bandwidths! They can clone their minds and directly learn from each other’s experiences!
I don’t think it’s obvious that this makes AGI more dangerous: for a fixed total impact of AGI, better coordination means the AGIs don’t have to be as competent at individual thinking, because they can lean relatively more on group thinking. So at the point where the AGIs are becoming very powerful in aggregate, this argument pushes us away from expecting them to be good at individual thinking.
Also, it’s not obvious that early AIs will actually have this affordance if their creators don’t deliberately train it in. Current ML does not normally produce AIs that can usefully share mind-states, and it would probably take non-trivial engineering to hook models up so that sharing mind-state works at all, as the toy sketch below illustrates.
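To make the asymmetry concrete, here is a minimal toy sketch in PyTorch. Everything in it is my illustrative assumption, not anything from the original: the hypothetical `ToyAgent` class uses a GRU’s hidden state as a stand-in for “mind-state.” The point is that cloning *weights* is a one-line copy, while transferring *in-flight mental state* only works because we deliberately exposed the hidden state and the two architectures match exactly.

```python
import copy
import torch
import torch.nn as nn

# Hypothetical toy "agent": a GRU whose recurrent hidden state
# stands in for the agent's accumulated "mind-state".
class ToyAgent(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.rnn = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)

    def forward(self, x, h=None):
        out, h = self.rnn(x, h)
        return out, h

agent_a = ToyAgent()

# Cloning the *weights* is trivial: one deep copy and the twin exists.
agent_b = copy.deepcopy(agent_a)

x = torch.randn(1, 5, 16)     # some observation sequence
_, hidden_a = agent_a(x)      # agent A accumulates "mind-state" in its hidden state

# A fresh episode for B discards everything A picked up in-context:
_, hidden_b_cold = agent_b(torch.randn(1, 5, 16))

# Handing B a copy of A's mind-state works here only because we built an
# explicit channel for it (the `h` argument) and the architectures are
# identical. Nothing in standard training makes such an exchange possible,
# let alone *useful*, between independently trained models.
_, hidden_b_warm = agent_b(torch.randn(1, 5, 16), h=hidden_a.detach())
```

In this toy setup the plumbing is one keyword argument; for real systems, the analogous channel (shared activations, KV caches, or learned state) doesn’t come for free from ordinary training, which is the point of the argument above.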