Say we find an algorithm for producing progressively more accurate beliefs about itself and the world. This algorithm may be long and complicated, perhaps augmented by rules of thumb whenever the evidence available to it says those rules make better predictions. (E.g., "nine times out of ten the Enterprise is not destroyed.") Combine this with an arbitrary goal and we have the makings of a seed AI.
Seems like this could straightforwardly improve its ability to predict humans without changing its goal, which may be ‘maximize pleasure’ or ‘maximize X’. Why would it need to change its goal?
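To make the point concrete, here is a minimal sketch in Python of the kind of separation being described. It is purely illustrative, not anyone's actual proposal: the agent keeps a pool of candidate rules of thumb, scores each by its track record on observed evidence, and predicts with the best-scoring rule, while the goal function is a separate, fixed object that the update loop never touches. All names here (Rule, SeedAgent, the toy rules) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    """A rule of thumb: a prediction function plus its running track record."""
    name: str
    predict: Callable[[dict], bool]
    hits: int = 0
    trials: int = 0

    @property
    def accuracy(self) -> float:
        # Unscored rules get a neutral prior of 0.5.
        return self.hits / self.trials if self.trials else 0.5


@dataclass
class SeedAgent:
    rules: List[Rule]
    goal: Callable[[dict], float]  # fixed utility function; never modified below

    def observe(self, situation: dict, outcome: bool) -> None:
        """Update each rule's track record against an observed outcome."""
        for rule in self.rules:
            rule.trials += 1
            rule.hits += int(rule.predict(situation) == outcome)

    def predict(self, situation: dict) -> bool:
        """Predict with whichever rule currently has the best track record."""
        best = max(self.rules, key=lambda r: r.accuracy)
        return best.predict(situation)


if __name__ == "__main__":
    # Two toy rules about whether the ship survives the episode.
    rules = [
        Rule("always-destroyed", lambda s: False),
        Rule("usually-survives", lambda s: True),
    ]
    agent = SeedAgent(rules=rules, goal=lambda world: world.get("pleasure", 0.0))

    # Feed it evidence: the ship survives nine episodes out of ten.
    for episode in range(10):
        agent.observe({"episode": episode}, outcome=(episode != 3))

    print(agent.predict({"episode": 10}))   # True: the better-calibrated rule wins
    print(agent.goal({"pleasure": 42.0}))   # 42.0: the goal is untouched by learning
```

Nothing in the learning loop reads or writes `goal`; the machinery that gets better at prediction and the thing being optimized for are simply different components, which is the intuition behind asking why improvement in one would force a change in the other.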
If you deny the possibility of the above algorithm, then before giving any habitual response, please recall what humanity knows about clinical vs. actuarial judgment. What lesson do you take from this?