I intended to refer to understanding the concept of manipulation well enough to avoid it, if the AGI “wanted” to.
As for understanding the concept of intent, I agree that “true” intent is very difficult to understand, particularly when it’s projected far into the future. That’s a huge problem for approaches like CEV. The virtue of the approach I’m suggesting is that it bypasses that complexity entirely (while introducing new problems). Instead of inferring “true” intent, the AGI just “wants” to do what its human principal tells it to do. The human gets to decide what their intent is. The machine just has to understand what the human meant by what they said, and the human can clarify that in conversation. I’m thinking of this as do-what-I-mean-and-check (DWIMAC) alignment. More on this in Instruction-following AGI is easier and more likely than value aligned AGI.
I’ll read your article.