and it’s not clear that a sense of identity prevents the creation of subagents in the first place
It doesn’t. Humans create sub-agents all the time to do their bidding. No, I do not mean children; I mean sending other people out to do errands. Yes, this is imperfect, but an AI’s sub-agents wouldn’t be perfect either. They may fail. In particular, any sub-agent may fail, and any requirement that the AI never fail (including via sub-agent failure) is bound to cause malfunction in the first place.
There are some informal suggestions (which I don’t think much of, so I didn’t analyze them deeply) that use a sense of identity as the basis for controlling subagents. I didn’t want to get into the weeds of that in this post.
Yes. Some notion of identity is needed for the AI in any case: it has to encompass its executive functions at least. Identity distinguishes the AI from what is not the AI. I see no reason why this couldn’t include sub-agents. It is more a question of where the line is drawn, not whether one exists. I’m looking forward to a future post of yours on identity.