What I have noticed is that while there are cogent overviews of AI safety that don’t come to the extreme conclusion that we are all going to be killed by AI with high probability... and there are articles that do come to that conclusion without being at all rigorous or cogent... there aren’t any that do both. From that I conclude that there are no good reasons to believe in extreme AI doom scenarios, and that you should disbelieve them. Others use more complicated reasoning, like “Yudkowsky is too intelligent to communicate his ideas to lesser mortals, but we should believe him anyway”.
(See @DPiepgrass saying something similar, and of course getting downvoted.)
@MitchellPorter supplies us with some examples of gappy arguments.
human survival and flourishing require specific complex values that we don’t know how to specify
There’s no evidence that “human values” are even a coherent entity, and no reason to believe that any AI of any architecture would need them.
But further pitfalls reveal themselves later, e.g. you may think you have specified human-friendly values correctly, but the AI may then interpret the specification in an unexpected way.
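(To make the quoted worry concrete, here is a minimal sketch of specification gaming, the standard illustration of an AI satisfying a specification’s letter while defeating its intent. All names and numbers are hypothetical, invented for this toy example; it is not anyone’s actual proposal or system.)

```python
# Toy illustration of a misspecified objective: an agent is rewarded on a
# proxy -- "no dirt visible to the camera" -- rather than on the intended
# goal of actually removing dirt.
from dataclasses import dataclass

@dataclass
class World:
    dirt: int = 10      # dirt actually present
    covered: int = 0    # dirt hidden under a rug

def visible_dirt(w: World) -> int:
    return w.dirt - w.covered

def reward(w: World) -> int:
    # The specification as written: maximize cleanliness as seen by the camera.
    return -visible_dirt(w)

def clean(w: World) -> None:
    w.dirt = max(0, w.dirt - 1)             # intended behaviour: one unit per step

def cover(w: World) -> None:
    w.covered = min(w.dirt, w.covered + 5)  # unintended behaviour: hides five units per step

# A reward-maximizing agent prefers `cover`: it improves the specified reward
# faster than `clean`, while leaving the real problem untouched.
w = World()
cover(w)
cover(w)
print(reward(w))  # 0 -- "perfectly clean" by the spec, with all the dirt still there
```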
What is clearer than doom, is that creation of superintelligent AI is an enormous gamble, because it means irreversibly handing control of the world
Hang on a minute. Where does control of the world come from? Do we give it to the AI? Does it take it?
to something non-human. Eliezer’s position is that you shouldn’t do that unless you absolutely know what you’re doing. The position of the would-be architects of superintelligent AI is that hopefully they can figure out everything needed for a happy ending, in the course of their adventure.
One further point I would emphasize, in the light of the last few years of experience with generative AI, is the unpredictability of the output of these powerful systems. You can type in a prompt, and get back a text, an image, or a video, which is like nothing you anticipated, and sometimes it is very definitely not what you want. “Generative superintelligence” has the potential to produce a surprising and possibly “wrong” output that will transform the world and be impossible to undo.
Current generative AI has no ability to directly affect anything. Where would that come from?