The spectrum you’re describing is related, I think, to the spectrum that appears in the AIT definition of agency, where there is a dependence on the cost of computational resources. This means the same system can appear agentic from a resource-scarce perspective but non-agentic from a resource-abundant perspective. The former corresponds to the Vingean regime and the latter to the predictable regime (see the toy sketch below). However, the framework does have a notion of prior and not just utility, so it is possible to ascribe beliefs to Vingean agents. I think that makes sense: the beliefs of another agent can predictably differ from your own, if only because there is some evidence that you have seen but that the other agent, to the best of your knowledge, has not[1].
You need to allow for the possibility that the other agent inferred this evidence from some pattern you are not aware of, but you should not be confident of this. For example, even an arbitrarily intelligent AI that received zero external information should have a hard time inferring certain things about the world that we know.
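To make the resource-dependence point concrete, here is a toy sketch, not the actual AIT agency measure; the symbols $g_R$, $K_R$, $\nu$ and $U$ are just illustrative notation I am introducing here:

$$g_R(\pi) \;=\; K_R(\pi) \;-\; \min_{(\nu,\,U)\,:\,\pi \text{ near-optimal for } (\nu,\,U)} \big[\,K_R(\nu) + K_R(U)\,\big]$$

where $\pi$ is the observed policy and $K_R(x)$ is the length of the shortest description of $x$ that the observer can actually unpack within a resource budget $R$. When resources are scarce, predicting $\pi$ directly is expensive, so the intentional-stance compression via some prior $\nu$ and utility $U$ wins and $g_R(\pi)$ is large: the Vingean regime. When resources are abundant, $K_R(\pi)$ collapses toward the plain description length of $\pi$, the advantage vanishes, and the same system reads as predictable. Same $\pi$, different $R$.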
Everyone above you looks like the tide and everyone below you looks like an ant?