> I’ll say though that I don’t think the usefulness or validity of the 0P/1P idea hinges on whether it helps with anthropics or Sleeping Beauty (note that I marked the Sleeping Beauty idea as speculation).
I agree. Or I’d even say that the usefulness and validity of the 0P/1P idea is inversely correlated with its applications to “anthropic reasoning”.
> This is frustrating because I’m trying hard here to specify exactly what I mean by the stuff I call “1st Person”
Yes, I see that and I’m sorry. This kind of warning isn’t aimed at you in particular; it comes from my own frustration with how people in general tend to misuse such ideas.
> What makes the interpretations different practically comes from wiring them up differently in the robot—is it reasoning about its world model or about its sensor values? It sounds like you think the 1P interpretation is superfluous, is that right?
I’m not sure. It seems that one of them has to be reducible to the other, though probably in the opposite direction. Isn’t having a world model also a type of experience?
Like, consider two events: “one particular robot observes red” and “any robot observes red”. It seems that the first one is 1st-person, while the second is 0th-person, in your terms. When a robot observes red with its own sensor, it concludes that it in particular has observed red, and deduces that this means any robot has observed red. The observation leads to an update of its world model. But what if all the robots had a synchronized sensor that triggered for everyone when any of them observed red? Is it a 1st-person perspective now?
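For concreteness, here is a minimal toy sketch in Python (all names are hypothetical, not from the post) of the two wirings: an update driven by the robot’s own sensor value (1P) versus a proposition recorded in its world model (0P), plus the synchronized-sensor variant:

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    # 1P: this robot's own sensor reading
    sensor_red: bool = False
    # 0P: propositions the robot holds about the world
    world_model: set = field(default_factory=set)

    def update(self) -> None:
        if self.sensor_red:
            # A 1P event ("my sensor fired") converted into a 0P proposition:
            self.world_model.add("some robot observed red")

def broadcast(robots: list[Robot]) -> None:
    """Synchronized-sensor variant: any detection trips every robot's sensor."""
    if any(r.sensor_red for r in robots):
        for r in robots:
            # Each robot still receives the news as *its own* sensor value,
            # i.e. as a 1P event, even though the trigger is shared.
            r.sensor_red = True
            r.update()
```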
Probability theory describes the subjective credence of a person who has observed a specific outcome from a set of possible outcomes. It’s about 1P in the sense that different people may have different sets of possible outcomes and thus different credences after an observation. But it’s also about 0P, because any person who observed the same outcome from the same set of possible outcomes should have the same credence.
I guess I feel that the 0P/1P distinction doesn’t really carve math at its joints. But I’ll have to think more about it.
> Isn’t having a world model also a type of experience?
It is if the robot has introspective abilities, which is not necessarily the case. But yes, it is generally possible to convert 0P statements to 1P statements and vice versa. My claim is essentially that this is not an isomorphism.
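As a toy illustration of the non-isomorphism claim (a sketch with hypothetical conversion functions, not anything from the post): converting a 1P statement to a 0P one and back does not recover the original, since the indexical content is lost along the way.

```python
# Hypothetical sketch: the 0P/1P conversions lose indexical information,
# so the round trip is not the identity, i.e. not an isomorphism.

def to_0p(stmt_1p: str, agent: str) -> str:
    """1P -> 0P: an experience becomes a proposition about the world."""
    return stmt_1p.replace("I observe", f"{agent}'s sensor registered")

def to_1p(stmt_0p: str) -> str:
    """0P -> 1P: a world proposition becomes the experience of learning it."""
    return f"I have evidence that {stmt_0p}"

original = "I observe red"
round_trip = to_1p(to_0p(original, agent="robot_A"))
print(round_trip)  # I have evidence that robot_A's sensor registered red
assert round_trip != original  # the immediacy of the 1P statement is gone
```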
> But what if all the robots had a synchronized sensor that triggered for everyone when any of them observed red? Is it a 1st-person perspective now?
The 1P semantics is a framework that can be used to design and reason about agents. Someone who thought of “you” as referring to something with a 1P perspective would still want to think of those robots that way, but it would be less helpful for the robots themselves to be designed around 1P semantics if they worked like that.
> Probability theory describes the subjective credence of a person who has observed a specific outcome from a set of possible outcomes. It’s about 1P in the sense that different people may have different sets of possible outcomes and thus different credences after an observation. But it’s also about 0P, because any person who observed the same outcome from the same set of possible outcomes should have the same credence.
I think this is wrong, and that there is a wholly 0P probability theory and a wholly 1P probability theory. Agents can have different 0P probabilities because they don’t necessarily have the same priors or models, or haven’t seen the same evidence (yes, seeing evidence would be a 1P event, but it can (imperfectly) be converted into a 0P statement, which would essentially amount to adding a new axiom to the 0P theory).
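A minimal worked example of that claim (the numbers and function are illustrative, not from the post): two agents with the same model who see the same evidence but hold different priors end up with different 0P posteriors, while an agent sharing the first agent’s prior, model, and evidence must match its posterior exactly.

```python
def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) via Bayes' rule for a binary hypothesis H."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Same model P(E|H) = 0.9, P(E|~H) = 0.2; same observed evidence E.
a = posterior(0.5, 0.9, 0.2)  # agent A, prior 0.5 -> ~0.818
b = posterior(0.1, 0.9, 0.2)  # agent B, prior 0.1 -> ~0.333 (different prior, different posterior)
c = posterior(0.5, 0.9, 0.2)  # agent C shares A's prior and model -> identical posterior
assert a == c and a != b
```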