I might give the essence of the assumptions as something like: you can’t beat superintelligence; intelligence is independent of value; and human survival and flourishing require specific complex values that we don’t know how to specify.
But further pitfalls reveal themselves later, e.g. you may think you have specified human-friendly values correctly, but the AI may then interpret the specification in an unexpected way.
What is clearer than doom is that the creation of superintelligent AI is an enormous gamble, because it means irreversibly handing control of the world to something non-human. Eliezer’s position is that you shouldn’t do that unless you absolutely know what you’re doing. The position of the would-be architects of superintelligent AI is that, hopefully, they can figure out everything needed for a happy ending in the course of their adventure.
One further point I would emphasize, in the light of the last few years of experience with generative AI, is the unpredictability of the output of these powerful systems. You can type in a prompt, and get back a text, an image, or a video, which is like nothing you anticipated, and sometimes it is very definitely not what you want. “Generative superintelligence” has the potential to produce a surprising and possibly “wrong” output that will transform the world and be impossible to undo.
David Chalmers asked for one last year, but there isn’t.