This post actually kind of suggests that Eliezer is an Unfriendly AI.
That is, human beings do not have some specific ultimate goal that they are trying to achieve, and they do not try to make everything else fit in with that. This is why they do not destroy the world. Correspondingly, the fear that an AI would destroy the world is precisely the fear that it would have a goal like that.
So Eliezer’s abhorrence of lost purposes makes him believe that an AI would be like him: just as he fanatically seeks his goals, thereby risking destroying the world, so he expects an AI to fanatically seek its goals and destroy the world.
I expect an AI to be more like normal humans than like Eliezer. It will not fanatically seek any goal, and it will not abhor lost purposes.