I think there are a lot of places where we agree. In this comment I was trying to say that I feel doubtful about the idea of a superintelligence arising once, and then no other superintelligences arise because the first one had time to fully seize control of the world. I think it’s also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
I think the offense-dominant nature of our current technological milieu means that humanity is almost certainly toast under the multipolar superintelligence scenario unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Responses:
Sure (if ‘shaping’ is merely ‘having a causal effect on’, not necessarily in the hoped-for direction).
Yes, that’s what I meant. Control seems like not-at-all a default scenario to me. More like the accelerating self-improving AI process is a boulder tumbling down a hill, and humanity is a stone in its path that may alter its trajectory (while likely being destroyed in the process).
a more multi-polar community of AIs
Sure, that could happen before superintelligence, but why do you then frame it as an alternative to superintelligence?[3]
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
I’m pretty sure we’re on a fast-track to either superintelligence-within-ten-years or civilizational collapse (e.g. large scale nuclear war). I doubt very much that any governance effort will manage to delay superintelligence for more than 10 years from now.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress, not on attempts to pause/delay. I think that algorithmic advance is the most dangerous piece of the puzzle, and wouldn’t be much hindered by restrictions on large training runs (which is what people often mean when talking of delay).
But, if we’re skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun. Then at that point, we could delay, and focus on more robust alignment (including value-alignment rather than just intent-alignment) and on human augmentation / digital people.
I talk more about my thoughts on this in my post here: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy
My response, before having read the linked post:
I was trying to say that I feel doubtful about the idea of a superintelligence arising once [...] I think it’s also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
Okay. I am not seeing why you are doubtful. (I agree 2+ arising near enough in time is merely possible, but it seems like you think it’s much more than merely possible, e.g. 5%+ likely? That’s what I’m reading into “doubtful”)
unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Why would the pact protect beings other than the two ASIs? (If one wouldn’t have an incentive to protect, why would two?) (Edit: Or, based on the term “governance framework”, do you believe the human+AGI government could actually control ASIs?)
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
Thanks for clarifying. It’s not intuitive to me why that would make it more likely, and I can’t find anything else in this comment about that.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress [...] if we’re skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun
I see. That does help me understand the motive for ‘control’ research more.