One thing I am not clear about is whether you are saying that a tool AI spontaneously develops what appears to be intentionality. It certainly seems that this is what you are saying, initially with a human in the feedback loop, until the suggestion to “create an AI with these motivations” is implemented. If so, then why do you say that “there’s some daylight between superintelligent tools and agents”?