“Building an AI that doesn’t game your specifications” is the actual “alignment question” we should be doing research on.
Ok, it sounds to me like you’re saying:
“When you train ML systems, they game your specifications because the training dynamics are too dumb to infer what you actually want. We just need One Weird Trick to get the training dynamics to Do What You Mean Not What You Say, and then it will all work out, and there’s no demon that will create another obstacle once you’ve surmounted this one.”
That is, training processes are not neutral; there are the bad training processes we have now (or had, before the recent positive developments), and eventually there will be good training processes that create aligned-by-default systems.
Is this roughly right, or am I misunderstanding you?