Even an FAI has two goals (Friendliness and increasing its intelligence) which may come into conflict.
No, just Friendliness. Increasing intelligence has no weight whatsoever as a terminal goal. Of course, an AI that did not increase its intelligence to a level at which it could do anything practical to aid me (or whatever the AI is Friendly to) is trivially not Friendly a posteriori.
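A toy sketch of what I mean, in my own made-up framing (the names WorldState, person_wellbeing, and terminal_utility are just illustrative, not anyone's actual design): the terminal utility function scores world states only by how well the beneficiary is doing, and the AI's own intelligence never appears as a term in it.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    person_wellbeing: float    # how well the person the AI is Friendly to is doing
    agent_intelligence: float  # the AI's own capability level

def terminal_utility(state: WorldState) -> float:
    # Friendliness is the only terminal goal: intelligence gets zero weight here.
    return state.person_wellbeing

# Extra intelligence alone adds nothing terminally; it only matters instrumentally,
# through the better world states a smarter agent can actually bring about.
low  = WorldState(person_wellbeing=0.6, agent_intelligence=1.0)
high = WorldState(person_wellbeing=0.6, agent_intelligence=100.0)
assert terminal_utility(low) == terminal_utility(high)
```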
That leads to an interesting question—how would an FAI decide how much intelligence is enough?
I don’t know. It’s supposed to be the smart one, not me. ;)
I’m hoping it goes something like:
Predict the expected outcome of choosing to self improve some more.
Predict the expected outcome of choosing not to self improve some more.
Do whichever one gives the better probability distribution of results.
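In toy form, the decision rule above might look like the sketch below; the predictive models and the utility function are doing all the real work and are simply assumed here (the names should_self_improve, predict_if_improve, and predict_if_not are hypothetical):

```python
import random
from typing import Callable

def expected_utility(sample_outcome: Callable[[], float],
                     utility: Callable[[float], float],
                     n_samples: int = 10_000) -> float:
    # Monte Carlo estimate of expected utility under a predicted outcome distribution.
    return sum(utility(sample_outcome()) for _ in range(n_samples)) / n_samples

def should_self_improve(predict_if_improve: Callable[[], float],
                        predict_if_not: Callable[[], float],
                        utility: Callable[[float], float]) -> bool:
    # 1. Predict the expected outcome of choosing to self-improve some more.
    eu_improve = expected_utility(predict_if_improve, utility)
    # 2. Predict the expected outcome of choosing not to.
    eu_stay = expected_utility(predict_if_not, utility)
    # 3. Do whichever gives the better expected result.
    return eu_improve > eu_stay

# Toy stand-in predictors, pure assumptions for illustration: self-improving has
# higher expected upside but more variance than staying as-is.
predict_improve = lambda: random.gauss(0.8, 0.3)
predict_stay = lambda: random.gauss(0.6, 0.1)

print(should_self_improve(predict_improve, predict_stay, utility=lambda w: w))
```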