It can easily be argued that evolution did a good job, not a bad job, by not giving us a "primary directive." The reason AI is dangerous is precisely that it might have such a directive; being an "optimizer" is exactly why one fears that AI might destroy the world. So if anything, kingmaker is correct to think that since human beings lack such a directive, it is at least theoretically possible that AIs will lack one too, and that they will not destroy the world, for similar reasons.
If we had a simple primary directive, we would be fully satisfied to have a machine accomplish it for us, and building a machine that would do it would be much easier.
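The contrast can be made concrete with a toy sketch (everything here is purely illustrative, not anyone's actual proposal): an agent with a single objective consumes every resource available to it, while an agent balancing competing drives with diminishing returns halts on its own at an interior optimum.

```python
import math

def single_directive_agent(resources: int) -> int:
    """One primary directive: convert every resource into paperclips."""
    clips = 0
    while resources > 0:  # no competing value ever says "enough"
        resources -= 1
        clips += 1
    return clips

def multi_drive_agent(resources: int) -> int:
    """Competing drives with diminishing returns; halts on its own."""
    clips, reserve = 0, resources

    def utility(c: int, r: int) -> float:
        # two drives, each with diminishing marginal value
        return math.sqrt(c) + math.sqrt(r)

    # convert a resource only while the gain from one more clip
    # exceeds the loss from shrinking the reserve
    while reserve > 0 and utility(clips + 1, reserve - 1) > utility(clips, reserve):
        clips += 1
        reserve -= 1
    return clips

print(single_directive_agent(100))  # 100: consumes everything
print(multi_drive_agent(100))       # 50: stops at an interior optimum
```

The point of the sketch is only that the danger comes from the shape of the objective, not from capability: the first agent never has a reason to stop, while the second stops as soon as further optimization of one drive costs more than it gains.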