Isn’t having a world model also a type of experience?
It is if the robot has introspective abilities, which is not necessarily the case. But yes, it is generally possible to convert 0P statements to 1P statements and vice-versa. My claim is essentially that this is not an isomorphism.
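To illustrate the kind of information the translation loses (this is my own toy encoding, not anything from the discussion): a 0P statement picks out a set of worlds, while a 1P statement is evaluated relative to which robot you are. Dropping the indexical turns “I see red” into, at best, “some robot sees red”, and the round trip doesn’t recover the original statement:

```python
from itertools import product

ROBOTS = ["A", "B"]
COLORS = ["red", "green"]

# A "world" assigns each robot's camera a reading.
WORLDS = [dict(zip(ROBOTS, readings))
          for readings in product(COLORS, repeat=len(ROBOTS))]

def zero_p(statement):
    """A 0P statement, modeled as the set of worlds where it holds."""
    return {i for i, world in enumerate(WORLDS) if statement(world)}

def i_see_red(world, me):
    """A 1P statement: evaluated at a (world, which-robot-am-I) pair."""
    return world[me] == "red"

# Without the indexical "me", the closest 0P translation is the
# disjunction "some robot sees red": a strictly coarser statement.
some_robot_sees_red = zero_p(lambda w: any(w[r] == "red" for r in ROBOTS))

w = WORLDS[1]  # the world where A sees red and B sees green
print(i_see_red(w, me="A"), i_see_red(w, me="B"))  # True False
print(WORLDS.index(w) in some_robot_sees_red)      # True, whoever "I" am
```

The 0P side can’t distinguish A’s situation from B’s in that world; that asymmetry is the sense in which the conversion fails to be an isomorphism.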
But what if all the robots had a synchronized sensor that triggered for every one of them whenever any of them observed red? Is that a 1st-person perspective now?
The 1P semantics is a framework that can be used to design and reason about agents. Someone who thinks of “you” as referring to something with a 1P perspective would want to describe those robots that way, but if the robots actually worked like that, it wouldn’t be as helpful to design the robots themselves around a 1P perspective.
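A hypothetical sketch of the scenario in the question (the names and setup are mine): each robot keeps a private camera reading, and the shared sensor fires for all of them exactly when any camera registers red. The shared signal is a plain disjunction over private readings, i.e. a 0P fact equally available to every robot; only the private reading is indexical.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    camera_sees_red: bool

def shared_sensor(robots):
    # The synchronized signal is the OR of the private readings:
    # a 0P fact about the world, the same for every robot.
    return any(r.camera_sees_red for r in robots)

fleet = [Robot("A", False), Robot("B", True), Robot("C", False)]
fired = shared_sensor(fleet)
for r in fleet:
    print(r.name, "shared:", fired, "private:", r.camera_sees_red)
```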
Probability theory describes the subjective credence of a person who has observed a specific outcome from a set of possible outcomes. It’s about 1P in the sense that different people may have different possible outcomes and thus different credences after an observation. But it’s also about 0P, because any person who observed the same outcome from the same set of possible outcomes should have the same credence.
I think this is wrong, and that there is a wholly 0P probability theory and a wholly 1P probability theory. Agents can have different 0P probabilities because they don’t necessarily have the same priors or models, or haven’t seen the same evidence (yes, seeing evidence is a 1P event, but it can be (imperfectly) converted into a 0P statement, which would essentially amount to adding a new axiom to the 0P theory).
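For concreteness, here is a minimal Bayes-rule sketch (toy numbers of my own) of how two agents running the same 0P update on the same evidence can still end up with different 0P probabilities, purely because their priors differ:

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Identical evidence model, different priors:
p_e_h, p_e_not_h = 0.9, 0.2
print(posterior(0.5, p_e_h, p_e_not_h))  # ~0.818
print(posterior(0.1, p_e_h, p_e_not_h))  # ~0.333
```

Conditioning on the evidence here plays the role described above: once the 1P event of seeing the evidence has been (imperfectly) converted into a 0P statement, updating on it is like adding that statement as a new premise to the 0P theory.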