I’m still reading your Sleeping Beauty posts, so I can’t properly respond to all your points yet. I’ll say though that I don’t think the usefulness or validity of the 0P/1P idea hinges on whether it helps with anthropics or Sleeping Beauty (note that I marked the Sleeping Beauty idea as speculation).
If they are not, then saying the phrase “1st person perspective” doesn’t suddenly allow us to use it.
This is frustrating because I’m trying hard here to specify exactly what I mean by the stuff I call “1st Person”. It’s a different interpretation of classical logic. The different interpretation refers to the use of sets of experiences vs the use of sets of worlds in the semantics. Within a particular interpretation, you can lawfully use all the same logic, math, probability, etc… because you’re just switching out which set you’re using for the semantics. What makes the interpretations different practically comes from wiring them up differently in the robot—is it reasoning about its world model or about its sensor values? It sounds like you think the 1P interpretation is superfluous, is that right?
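To make the “wiring” concrete, here is a minimal sketch (the world/experience names and the red/blue example are invented purely for illustration, not anything formal): the Boolean operations are identical under both interpretations; what changes is which set the propositions are subsets of, and which part of the robot a query consults.

```python
# Illustrative sketch: the same Boolean algebra of propositions,
# interpreted over two different carrier sets.

# 0P: a proposition is a set of possible worlds (states of the world model).
worlds = {"w_red_here", "w_red_there", "w_no_red"}
robot_sees_red_0p = {"w_red_here"}                        # "this particular robot's sensor reads red"
some_robot_sees_red_0p = {"w_red_here", "w_red_there"}    # "some robot's sensor reads red"

# 1P: a proposition is a set of possible experiences (sensor values).
experiences = {"RED", "GREEN", "BLUE"}
see_red_1p = {"RED"}                                      # "red is observed" -- no observer named

def conj(p, q):   # the logical operations are the same in both interpretations
    return p & q

def disj(p, q):
    return p | q

def neg(p, carrier):
    return carrier - p

# The same laws hold in both cases, e.g. double negation:
assert neg(neg(see_red_1p, experiences), experiences) == see_red_1p
assert neg(neg(robot_sees_red_0p, worlds), worlds) == robot_sees_red_0p

# What differs is the wiring: a 0P query consults the world model,
# a 1P query consults the current sensor value.
def holds_0p(proposition, believed_worlds):
    """True iff the proposition holds in every world the robot considers possible."""
    return believed_worlds <= proposition

def holds_1p(proposition, current_sensor_value):
    """True iff the current experience is one of the experiences in the proposition."""
    return current_sensor_value in proposition

print(holds_1p(see_red_1p, "RED"))                        # True
print(holds_0p(some_robot_sees_red_0p, {"w_red_there"}))  # True
```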
Until then we are talking about the truth of statements “Red light was observed” and “Red light was not observed”.
Rephrasing it this way doesn’t change the fact that the observer has not yet been formally specified.
And if our mathematical model doesn’t track any other information, then for the sake of this mathematical model all the robots that observe red are the same entity. The whole point of math is that it’s true not just for one specific person but for everyone satisfying the conditions. That’s what makes it useful.
I agree that that is an important and useful aspect of what I would call 0P-mathematics. But I think it’s also useful to be able to build a robot with a mode of reasoning where it can reason about its sensor values in a straightforward way.
I’ll say though that I don’t think the usefulness or validity of the 0P/1P idea hinges on whether it helps with anthropics or Sleeping Beauty (note that I marked the Sleeping Beauty idea as speculation).
I agree. Or I’d even say that the usefulness and validity of the 0P/1P idea are inversely correlated with its applicability to “anthropic reasoning”.
This is frustrating because I’m trying hard here to specify exactly what I mean by the stuff I call “1st Person”
Yes, I see that, and I’m sorry. This kind of warning isn’t aimed at you in particular; it’s a result of my personal frustration with how people in general tend to misuse such ideas.
What makes the interpretations different practically comes from wiring them up differently in the robot—is it reasoning about its world model or about its sensor values? It sounds like you think the 1P interpretation is superfluous, is that right?
I’m not sure. It seems that one of them has to be reducible to the other, though probably in the opposite direction. Isn’t having a world model also a type of experience?
Like, consider two events: “one particular robot observes red” and “any robot observes red”. It seems that the first one is 1st person perspective, while the second is 0th person perspective, in your terms. When a robot observes red with its own sensor, it concludes that it in particular has observed red and deduces that this means that any robot has observed red. Observation leads to an update of the world model. But what if all the robots had a synchronized sensor that triggered for everyone when any of them observed red? Is it 1st person perspective now?
Probability theory describes the subjective credence of a person who observed a specific outcome from a set of possible outcomes. It’s about 1P in the sense that different people may have different possible outcomes and thus have different credence after an observation. But it’s also about 0P because any person who observed the same outcome from the same set of possible outcomes should have the same credence.
I guess I feel that the 0P/1P distinction doesn’t really carve math at its joints. But I’ll have to think more about it.
Isn’t having a world model also a type of experience?
It is if the robot has introspective abilities, which is not necessarily the case. But yes, it is generally possible to convert 0P statements to 1P statements and vice-versa. My claim is essentially that this is not an isomorphism.
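As a toy illustration of the non-isomorphism claim (my own made-up example, not a formal argument): distinct 0P statements can translate to the same 1P statement, so the round trip doesn’t bring you back to where you started.

```python
# Toy example: translating between 0P statements (sets of worlds) and
# 1P statements (sets of experiences) loses information in general.

worlds = {"w1": {"robot_A"}, "w2": {"robot_B"}, "w3": set()}  # which robots see red in each world

def to_1p(world_set, me):
    """0P -> 1P: what would *I* experience in the worlds of world_set?
    Note the extra, indexical input 'me' that a pure 0P statement doesn't carry."""
    return {("RED" if me in worlds[w] else "NOT_RED") for w in world_set}

def red_to_0p():
    """1P 'red is observed' -> 0P: without knowing which robot 'I' am,
    the best available translation is 'at least one robot's sensor reads red'."""
    return {w for w, reds in worlds.items() if reds}

a_sees_red = {"w1"}   # 0P: "robot_A's sensor reads red"
b_sees_red = {"w2"}   # 0P: "robot_B's sensor reads red"

print(to_1p(a_sees_red, "robot_A"))  # {'RED'}
print(to_1p(b_sees_red, "robot_B"))  # {'RED'}  -- two different 0P statements, same 1P content
print(sorted(red_to_0p()))           # ['w1', 'w2'] -- the round trip widens {"w1"} to {"w1", "w2"}
```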
But what if all the robots had a synchronized sensor that triggered for everyone when any of them observed red? Is it 1st person perspective now?
The 1P semantics is a framework that can be used to design and reason about agents. Someone who thought of “you” as referring to something with a 1P perspective would still want to think of those robots that way, but it wouldn’t be as helpful for the robots themselves to be designed around a 1P perspective if that’s how they worked.
Probability theory describes the subjective credence of a person who observed a specific outcome from a set of possible outcomes. It’s about 1P in the sense that different people may have different possible outcomes and thus have different credence after an observation. But it’s also about 0P because any person who observed the same outcome from the same set of possible outcomes should have the same credence.
I think this is wrong, and that there is a wholly 0P probability theory and a wholly 1P probability theory. Agents can have different 0P probabilities because they don’t necessarily have the same priors or models, or haven’t seen the same evidence (yes, seeing evidence would be a 1P event, but it can (imperfectly) be converted into a 0P statement, which would essentially be adding a new axiom to the 0P theory).
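A rough sketch of how I picture the two theories coexisting (toy numbers, purely illustrative): the 0P measure lives on worlds, the 1P measure lives on possible experiences, and a 1P observation gets imported into the 0P theory by conditioning on the corresponding (imperfect) 0P statement.

```python
# Toy illustration: a 0P prior over worlds, a 1P prior over the robot's next
# experience, and the (imperfect) import of a 1P observation into the 0P theory.

p_world = {"red_world": 0.5, "blue_world": 0.5}              # 0P prior over worlds
p_red_sensor_given = {"red_world": 0.9, "blue_world": 0.1}   # chance my sensor reads red in each world

# 1P prior over my next experience, obtained by mixing over the world model:
p_experience_red = sum(p_world[w] * p_red_sensor_given[w] for w in p_world)
print(p_experience_red)   # 0.5

# The 1P event "red is observed on my sensor" is imported into the 0P theory
# as the statement "this robot's sensor read red", and we condition on it:
posterior = {w: p_world[w] * p_red_sensor_given[w] / p_experience_red for w in p_world}
print(posterior)          # {'red_world': 0.9, 'blue_world': 0.1}
```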