I was trying to say that I feel doubtful about the idea of a superintelligence arising once [...] I think it’s also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
Okay. I'm not seeing why you're doubtful. (I agree that 2+ arising near enough in time is merely possible, but it seems like you think it's much more than merely possible, e.g. 5%+ likely? That's what I'm reading into "doubtful".)
unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Why would the pact protect beings other than the two ASIs? (If one ASI wouldn't have an incentive to protect them, why would two?) (Edit: Or, based on the term "governance framework", do you believe the human+AGI government could actually control ASIs?)
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
Thanks for clarifying. It’s not intuitive to me why that would make it more likely, and I can’t find anything else in this comment about that.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress [...] if we're skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun
I see. That does help me understand the motive for ‘control’ research more.
(My responses above were written before reading the linked post.)