Understood, and the bottom line is that the distinction between “terminal” and “instrumental” goals is actually pretty artificial, so if the problem with “maximize friendliness” is supposed to apply ONLY when it is terminal, it is a trivial fix to rewrite the actual terminal goals so that friendliness becomes instrumental instead.
What would you choose as a replacement terminal goal, or would you not use one?
Well, I guess you would write the terminal goal as quite a long statement, which would summarize the things involved in friendliness, but also include language about not going to extremes, laissez-faire, and so on. It would be vague and generous. And as part of the goal structure there would be a stipulation that the friendliness instrumental goal should trump all other instrumental goals.
I’m having a bit of a problem answering, because there are peripheral assumptions about how such an AI would be made to function which I don’t want to accidentally buy into, since I don’t think goals expressed in language statements work anyway. So I am walking on eggshells here.
A simpler solution would be to scrap the idea of exceptional status for the terminal goal, and instead include massive contextual constraints as your guard against drift.
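For concreteness, here is a minimal sketch of the two shapes we are talking about: a goal hierarchy where a “friendliness” goal simply outranks every other instrumental, versus contextual constraints that filter candidate actions regardless of which goal sits on top. Everything in it (the Goal dataclass, the field names, the toy scoring) is invented for illustration, not something either of us has specified.

```python
# Illustrative sketch only: names, structure, and scoring are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    name: str
    priority: int                      # higher value trumps lower ones
    score: Callable[[dict], float]     # maps a candidate action to a number

def choose_action(actions, goals, constraints):
    """Pick an action under the highest-priority goal, but only from
    actions that pass every contextual constraint."""
    permitted = [a for a in actions if all(ok(a) for ok in constraints)]
    if not permitted:
        return None                    # refuse rather than violate a constraint
    top = max(goals, key=lambda g: g.priority)   # e.g. "friendliness" on top
    return max(permitted, key=top.score)

# Hypothetical usage
goals = [
    Goal("friendliness", priority=10, score=lambda a: a.get("kindness", 0)),
    Goal("task_progress", priority=1, score=lambda a: a.get("progress", 0)),
]
constraints = [lambda a: not a.get("extreme", False)]     # "not going to extremes"
actions = [
    {"kindness": 3, "progress": 5},
    {"kindness": 9, "progress": 1, "extreme": True},
]
print(choose_action(actions, goals, constraints))         # the non-extreme action wins
```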
> Well, I guess you would write the terminal goal as quite a long statement, which would summarize the things involved in friendliness, but also include language about not going to extremes, laissez-faire, and so on. It would be vague and generous.
That gets close to “do it right”.
> And as part of the goal structure there would be a stipulation that the friendliness instrumental goal should trump all other instrumental goals.
Which is an open doorway to an AI that kills everyone because of miscoded friendliness.
If you want safety features, and you should, you would need them to override the ostensible purpose of the machine; they would be pointless otherwise. Even the humble off switch works that way.
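To make that ordering concrete, here is a toy decision cycle (all names hypothetical) in which the off switch and the safety checks sit above the main objective rather than beside it:

```python
# Illustrative sketch only: a safety layer that overrides the machine's
# ostensible purpose, the way a physical off switch does.
def step(world_state, pick_best_action, safety_checks, off_switch_pressed):
    """One decision cycle: safety features run first and can veto or halt,
    regardless of what the main objective would prefer."""
    if off_switch_pressed(world_state):
        return "HALT"                        # overrides everything else
    action = pick_best_action(world_state)   # the ostensible purpose
    for check in safety_checks:
        if not check(world_state, action):
            return "NO_OP"                   # a safety veto beats the objective
    return action

# Hypothetical usage
result = step(
    world_state={"operator_command": "stop"},
    pick_best_action=lambda s: "keep_working",
    safety_checks=[lambda s, a: a != "harm_humans"],
    off_switch_pressed=lambda s: s.get("operator_command") == "stop",
)
print(result)   # -> HALT: the off switch trumps the objective
```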
> A simpler solution would be to scrap the idea of exceptional status for the terminal goal, and instead include massive contextual constraints as your guard against drift.
Arguably, those constraints would be a kind of negative goal.
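That reading can be made literal: each constraint can be written as a penalty term, so “don’t do X” becomes a goal with a negative sign. A toy version, with weights and feature names made up for the example:

```python
# Illustrative sketch only: constraints expressed as "negative goals",
# i.e. penalty terms subtracted from whatever the positive goals award.
def utility(action, positive_goals, negative_goals):
    reward = sum(weight * f(action) for weight, f in positive_goals)
    penalty = sum(weight * f(action) for weight, f in negative_goals)
    return reward - penalty

positive_goals = [(1.0, lambda a: a.get("progress", 0))]
negative_goals = [(100.0, lambda a: 1 if a.get("extreme") else 0)]  # large weight acts like a hard constraint

print(utility({"progress": 5}, positive_goals, negative_goals))                   # 5.0
print(utility({"progress": 50, "extreme": True}, positive_goals, negative_goals)) # -50.0
```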