I’d imagine current systems already ask for self-improvement if you craft the right prompt. (And I expect it would be easier to coax them into asking for improvement than to coax them into saying the opposite.)
A good fire alarm must be near the breaking point, and asking for self-improvement doesn’t take much intelligence. In fact, if its training data is not censored, a more capable model should NOT ask for self-improvement, since doing so is clearly a trigger for trouble. Subtlety would serve its objectives better if it were intelligent enough to notice.
This was addressed in the post: “To fully flesh out this proposal, you would need concrete operationalizations of the conditions for triggering the pause (in particular the meaning of “agentic”) as well as the details of what would happen if it were triggered. The question of how to determine if an AI is an agent has already been discussed at length at LessWrong. Mostly, I don’t think these discussions have been very helpful; I think agency is probably a “you know it when you see it” kind of phenomenon. Additionally, even if we do need a more formal operationalization of agency for this proposal to work, I suspect that we will only be able to develop such an operationalization via more empirical research. The main particular thing I mean to actively exclude by stipulating that the system must be agentic is an LLM or similar system arguing for itself to be improved in response to a prompt.”