Yes, it does require limiting the spread of AGI. I only referred to it briefly in the phrase “...even if AGI does not proliferate widely”; I discuss it more in “do we all die anyway”. I don’t think it’s quite as stark as needing to shut down all other AGI projects or we’re doomed; a small number of AGIs under the control of different humans might be stable with good communication and agreements, at least until someone sufficiently malevolent or foolish gets involved. That’s why I’m using the term “proliferation”: I think the dynamics are somewhat similar to the nuclear standoff, where we’ve actually seen stability with a handful of actors.
I’m hoping the need to reduce proliferation will become apparent to anyone who sees the potential of real AGI and who thinks about international politics, including terrorism. I’m hoping the potential of AGI will be much more intuitively apparent to anyone having a conversation with something that’s smarter than them and just as agentic. We shall see.
Note that I did address your core point from the other comments you linked: human values aren’t well-defined, so you can’t align anything to them. I think aligning a superintelligence to anything in the neighborhood of common human preferences would be close enough to be pretty happy with, even if you can’t do something more clever and give future beings lots of freedom without letting them abuse it at others’ expense. Hopefully we can have a long reflection, or good answers will seem obvious once we have a superhuman intelligence helping us think through it. I have some ideas, but that’s a whole different project from surviving the first AGIs and thereby getting to have that discussion and that choice.
I think that even without any progress on understanding human values, we could still have a world thousands of times better than anything we’ve had, in the opinion of the vast majority of human beings who have lived or will live. That’s good enough for me, at least for now.
I don’t think it’s quite as stark as needing to shut down all other AGI projects or we’re doomed; a small number of AGIs under the control of different humans might be stable with good communication and agreements, at least until someone sufficiently malevolent or foolish gets involved.
Realistically, in order to have a reasonable degree of certainty that this state can be maintained for more than a trivial amount of time, this would at the very least require a hard ban on open-source AI, as well as international agreements that strictly enforce transparency and compute restrictions, with the direct use of force if need be. That’s especially true if governments get much more involved in AI in the near-term future (which I expect will happen).
Do you agree with this, as a baseline?
I do pretty much agree. All laws and international agreements are ultimately enforced by the use of force if need be, so that’s not saying anything new. There probably does need to be a hard ban on open-source AI at some point, but that point is well in the future, and I think the discussion will look very different once we have clearly parahuman AGI.
This is all going to be a tough pill to swallow. I think any government that enacts these rules will almost have to assure everyone that the benefits of real AGI will be spread as broadly as possible, and then follow through on that at least decently well. I see some hope in that becoming a necessity. We might get some oversight boards that could at least think clearly and apply some influence toward sanity.