Why does the fact that a superintelligence needs to solve the alignment problem for its own sake (to safely build its own successors) mean that humans building other superintelligences wouldn’t be a problem for it? It’s possible to have more than one problem at a time.
It’s possible, but I think it would require a modified version of the “low ceiling conjecture” to be true.
The standard “low ceiling conjecture” says that human-level intelligence is a hard (or soft) limit, and therefore that it will be impossible (or will take a very long time) to move from human-level AI to superintelligence. I think most of us tend not to believe that.
A modified version would keep the hard (or soft) limit but raise it slightly, so that a rapid transition to superintelligence is possible, yet the resulting superintelligence can’t then run away quickly in capabilities (no near-term “intelligence explosion”). If one believes this modified version of the “low ceiling conjecture”, then subsequent AIs produced by humanity might indeed be a relevant problem for it: the first superintelligence would be capped near the same ceiling, so it couldn’t simply outpace any later human-built systems. Without that cap, the first superintelligence would improve its own capabilities far faster than humans could build rivals, and later human-built AIs would never catch up.