How well do you think this logic works for humans?
Humans tend towards being adaptation-executers rather than utility-maximizers. That does make them less dangerous, in that it makes them less intelligent. If you programmed a self-modifying AI like that, it would still be at least as dangerous as a human who is capable of programming an AI. There’s also the simple fact that you can’t tell beforehand whether it’s leaning too far towards the utility-maximization side.
Isn’t that circular reasoning? I have a feeling that in this context “intelligent” is defined as “maximizing utility”.
And what is an “adaptation-executer”?
Pretty much.
If you just want to create a virtuous AI for some sort of deontological reason, then its being less intelligent isn’t a problem. If you want to get things done, then it is. The AI being subject to Dutch book betting only helps you insofar as the AI’s goals differ from yours and you don’t want it to be successful.
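To make the Dutch-book point concrete, here is a minimal sketch (the items, preferences, and fee are made up purely for illustration) of how an agent with cyclic, non-utility-maximizing preferences can be money-pumped: it accepts a chain of trades, each of which it individually prefers, and ends up holding what it started with, strictly poorer.

```python
# Illustrative only: a toy agent with cyclic preferences, and the
# "money pump" that exploits it.

# The agent prefers B over A, C over B, and A over C, and will pay a
# small fee to swap its current item for one it prefers.
PREFERRED_SWAP = {"A": "B", "B": "C", "C": "A"}
FEE = 1.0


def money_pump(start_item: str, start_money: float, rounds: int):
    """Offer the agent the swap it prefers, `rounds` times in a row."""
    item, money = start_item, start_money
    for _ in range(rounds):
        item = PREFERRED_SWAP[item]  # the agent accepts every offer
        money -= FEE                 # and pays the fee each time
    return item, money


print(money_pump("A", 10.0, 3))  # ('A', 7.0): same item, 3.0 poorer
```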
See Adaptation-Executers, not Fitness-Maximizers.