Where my thinking differs is that I don’t see how an AI significantly more intelligent than ourselves could be prevented from overriding its initial conditions (the human value alignments and safety measures that we build in). At the heart of it, “superintelligent” and “controlled by humanity” seem contradictory.
That’s why I originally mentioned “the long term”. We can design things however we want at this stage, but once AI can bootstrap itself, the initial blueprint becomes irrelevant.