Strong upvote—well laid out, clear explanation of your position and reasoning, I learned things.
Overall I think the lines of thought all make sense, but they seem to me to hinge entirely on your assigning a low probability to AI takeover scenarios, which you point out you have not modeled. I mean this in the sense that power concentration risks, as described, are only meaningful in scenarios where the power resides with the humans that create the AI, rather than the AI. Relatedly, the only way power concentration risks are lower in the non-centralization branch is if multiple projects yield AGI before any of them become particularly powerful, whereas this post assumes China would not be able to catch up to the hypothetical unified US project. I see the graphs showing a longer US lead time in the latter scenario, but I do not know if I agree the effect would be large enough to matter.
In other words, if instead you believed AI takeover scenarios were likely, or that the gap from human to superhuman level were small, then it wouldn't really matter how many projects were close to AGI—only the quality of the one that got there first. I don't want whoever-is-in-charge-at-the-DOD to be in control of the ultimate fate of humanity forever. I don't particularly want any private corporation to have that power either. I would, however, prefer almost any human group being in that position to humanity unintentionally losing control of its future and being permanently disempowered or destroyed.
Of course, the terms AGI, human level, and superhuman level are abstractions and approximations anyway, I get that. I personally am not convinced there's much difference between human and superhuman, and think that by the time we get robust-human-quality thinking, any AI will already be sufficiently superhuman in other areas that we'll be well past human level overall.