To me the interesting question is: how did the AI acquire enough ontology and bridging to build a subagent whose goals are well-grounded? And grounded in what, so to speak? In the subagent’s observable data, or in a fully deterministic ontology where all the uncertainty has been packed into the parameters?