Humans have a long history of dominating other humans, even across significant differences in intelligence. Geniuses pay taxes and follow laws made and enforced by merely competent people.
We have no idea whether much larger gaps in reasoning power can be overcome, and no reason to believe that the social and legal/threat pressures which work on humans (even very smart sociopaths) will have any effect on an AI.
If a superintelligence is general and includes any compatible notions of preferences and identity, it seems to me that it IS a person, and we should care about it at least as much as we care about humans. If it’s more … alien … than that, and I suspect it will be, then it’s not clear that coexistence is long-term feasible, let alone dominance by biologicals.
Many ‘merely competent people’ assembled together and organized towards a common goal seem to qualify as a superintelligence.
I kind of agree. Is that worthy of a top-level answer to this question?
What do you mean by ‘worthy’?
I mean, should you expand on your model of human groups as superintelligences, and apply it to the question of how humans can dominate an AI superintelligence?