I don’t think the risk ordering is obvious at all, especially not between #2 and #3, and especially not if you also take into account tractability concerns and risks other than extinction (e.g. stable totalitarianism, s-risks). Even if you thought coordinating with China might be worth it, I think it should be at least somewhat obvious why the US government [/ and its allies] might be very uncomfortable building a coalition with, say, North Korea or Russia. Even between #1 and #2, the probable increase in risks of centralization might make it not worth it, at least in some worlds, depending on how optimistic one is about e.g. alignment or the offense-defense balance from misuse of models with dangerous capabilities.
I also don’t think it’s obvious alternative paradigms would necessarily be both safer and tractable enough, even on 10-year timelines, especially if you don’t use AI automation (using the current paradigm, probably) to push those forward.
the probable increase in risks of centralization might make it not worth it
Can you say more about why the risk of centralization differs meaningfully between the three worlds?
IMO if you assume that (a) an intelligence explosion occurs at some point, (b) the leading actor uses the intelligence explosion to produce a superintelligence that provides a decisive strategic advantage, and (c) the superintelligence is aligned/controlled...
Then (in the absence of coordination) you very likely end up with centralization no matter what. It’s just a matter of whether OpenAI/Microsoft (scenario #1), the USG and allies (scenario #2), or a broader international coalition (weighted heavily toward the USG and China) is the one wielding the superintelligence.
(If anything, the “international coalition” approach seems less likely to lead to centralization than the other two, since you’re more likely to get post-AGI coordination.)
especially if you don’t use AI automation (using the current paradigm, probably) to push those forward.
In my vision, the national or international project would be investing in “superalignment”-style approaches; it would just (hopefully) have enough time/resources to invest in other approaches as well.
I typically assume we don’t get “infinite time”, i.e., even the international coalition is racing against “the clock” (e.g., the amount of time it takes for a rogue actor to develop ASI in a way that can’t be prevented, or the amount of time we have until a separate existential catastrophe occurs). So I think it would be unwise for the international coalition to completely abandon DL/superalignment, even if one of the big hopes is that a safer paradigm would be discovered in time.
IMO if you assume that (a) an intelligence explosion occurs at some point, (b) the leading actor uses the intelligence explosion to produce a superintelligence that provides a decisive strategic advantage, and (c) the superintelligence is aligned/controlled...
I don’t think this is obvious; stably multipolar worlds seem at least plausible to me.
@Bogdan, can you spell out a vision for a stably multipolar world with the above assumptions satisfied?
IMO assumption (b) is doing a lot of the work: you might argue that the IE will not give anyone a DSA, in which case things get more complicated. I do see some plausible stories in which this could happen, but they seem pretty unlikely.
See also here and here.
@Ryan, thanks for linking to those. Lmk if there are particular points you think are most relevant (meta: I think in general I find discourse more productive when it’s like “hey here’s a claim, also read more here” as opposed to links. Ofc that puts more communication burden on you though, so feel free to just take the links approach.)
(Yeah, I was just literally linking to things people might find relevant to read without making any particular claim. I think this is often slightly helpful, so I do it. Edit: when I do this, I should probably include a disclaimer like “Linking for relevance, not making any specific claim”.)
Yup, I was thinking about worlds in which there is no obvious DSA, or where the parties involved are risk-averse enough (perhaps e.g. for reasons like those in this talk).
My expectation is that DSI can (and will) be achieved before ASI. In fact, I expect ASI to be about as useful as a bomb whose minimum effect, if deployed, is destroying the entire solar system. In other words, useful only for Mutually Assured Destruction.
DSI only requires a nuclear-armed state actor to have an effective global missile defense system. Whichever nuclear-armed state actor gets that, without any other group having it, can effectively demand the surrender and disarmament of all other nations, including confiscating their compute resources.
Do you think missile defense is so difficult that only ASI can manage it? I don’t. That seems like a technical discussion that would need more details to hash out. I’m pretty sure an explicitly designed tool AI and a large fleet of drones and satellites could accomplish it.