I don’t think it’s quite as clear-cut as “shut down all other AGI projects or we’re doomed”; a small number of AGIs under the control of different humans might be stable, given good communication and agreements, at least until someone sufficiently malevolent or foolish gets involved.
Realistically, having any reasonable confidence that this state can be maintained for more than a trivial amount of time would, at the very least, require a hard ban on open-source AI, plus international agreements to strictly enforce transparency and compute restrictions, backed by the direct use of force if need be. That’s especially true if governments get much more involved in AI in the near-term future, which I expect will happen.

Do you agree with this, as a baseline?
I do pretty much agree. All laws and international agreements are ultimately enforced by force if need be, so that’s not saying anything new. It probably does need to become a hard ban on open-source AI at some point, but that point is well in the future, and I think the discussion will look very different once we have clearly parahuman AGI.

This is all going to be a tough pill to swallow. I think any government that enacts these rules will almost certainly also have to promise, and then follow through at least decently well on, spreading the benefits of real AGI as broadly as possible. I see some hope in that becoming a necessity. We might get some oversight boards that could at least think clearly and apply some influence toward sanity.