Frankly, I’m not sure whether the distinction between “worlds” and “experiences” is more useful or more harmful. There is definitely something that rings true about your post, but people have been misinterpreting these ideas in very silly ways for decades, and it seems that you are ready to go in the same direction, considering your mention of anthropics.
Mathematically, there are mutually exclusive outcomes which can be combined into events. It doesn’t matter whether these outcomes represent worlds or possible experiences in one world or whatever else—as long as they are truly mutually exclusive we can lawfully use probability theory. If they are not, then saying the phrase “1st person perspective” doesn’t suddenly allow us to use it.
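To make “lawfully” concrete, here is the requirement written out (nothing beyond the standard axioms): additivity is only licensed for events that genuinely cannot co-occur.

```latex
% Mutually exclusive events: A \cap B = \varnothing. Only then does additivity apply:
A \cap B = \varnothing \;\Longrightarrow\; P(A \cup B) = P(A) + P(B).
% If A \cap B \neq \varnothing, splitting probability mass as though A and B
% were exclusive is not licensed by the axioms.
```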
How do we give A the intuitive meaning of “my sensor sees red”?
We don’t, unless we can formally specify what “my” means. Until then we are talking about the truth of statements “Red light was observed” and “Red light was not observed”. And if our mathematical model doesn’t track any other information, then for the sake of this mathematical model all the robots that observe red are the same entity. The whole point of math is that it’s true not just for one specific person but for everyone satisfying the conditions. That’s what makes it useful.
Suppose I’m observing a die roll and I wonder what the probability is that the result will be “4”. The mathematical model that tells me that it’s 1⁄6 tells the same to you, or to any other person. It tells the same fact about any other roll of any other die with the same relevant properties.
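For concreteness, the model itself carries no observer index at all:

```latex
% Fair six-sided die; nothing in the expression refers to who is asking.
P(X = 4) \;=\; \frac{|\{4\}|}{|\{1,2,3,4,5,6\}|} \;=\; \frac{1}{6}.
```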
From this, we get a nice potential explanation to the Sleeping Beauty paradox: 1⁄2 is the 0P-probability, and 1⁄3 is the 1P-probability. This could also explain why both intuitions are so strong.
I was worried that you would go there. There is only one lawful way to define probability in the Sleeping Beauty problem. The crux of the disagreement between thirders and halfers is whether this awakening should be modeled as a random awakening between three equiprobable mutually exclusive outcomes: Heads&Monday, Tails&Monday and Tails&Tuesday. And there is one correct answer to it—no, it should not. We can formally prove that if the Tails&Monday awakening is always followed by the Tails&Tuesday awakening, then they are not mutually exclusive.
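One way to spell that out, treating the whole experiment as the elementary outcome (this is my formalization of the sentence above, not a quote from anyone’s proof):

```latex
% Let A = "a Tails&Monday awakening occurs in this experiment"
% and B = "a Tails&Tuesday awakening occurs in this experiment".
% "A is always followed by B" means A \subseteq B, therefore
P(A \cap B) = P(A) = P(\mathrm{Tails}) = \tfrac{1}{2} \neq 0,
% so A and B are not mutually exclusive, and cannot be treated as two of
% three equiprobable elementary outcomes.
```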
I’m still reading your Sleeping Beauty posts, so I can’t properly respond to all your points yet. I’ll say though that I don’t think the usefulness or validity of the 0P/1P idea hinges on whether it helps with anthropics or Sleeping Beauty (note that I marked the Sleeping Beauty idea as speculation).
If they are not, then saying the phrase “1st person perspective” doesn’t suddenly allow us to use it.
This is frustrating because I’m trying hard here to specify exactly what I mean by the stuff I call “1st Person”. It’s a different interpretation of classical logic: the difference is whether the semantics uses sets of experiences or sets of worlds. Within a particular interpretation, you can lawfully use all the same logic, math, probability, etc… because you’re just switching out which set you’re using for the semantics. What makes the interpretations different practically comes from wiring them up differently in the robot—is it reasoning about its world model or about its sensor values? It sounds like you think the 1P interpretation is superfluous, is that right?
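To make the “switching out which set” point concrete, here is a toy sketch (my own illustration; the particular worlds, readings, and the uniform measure are placeholders, not anything load-bearing):

```python
# Toy sketch: the same logical/probabilistic machinery, evaluated over two
# different semantic carriers (worlds vs. experiences).
from typing import Callable, Hashable, Set

Point = Hashable
Prop = Callable[[Point], bool]  # a proposition is a predicate on points


def extension(prop: Prop, points: Set[Point]) -> Set[Point]:
    """The subset of points where the proposition holds."""
    return {p for p in points if prop(p)}


def prob(prop: Prop, points: Set[Point]) -> float:
    """Probability of a proposition under a uniform measure on a finite point set."""
    return len(extension(prop, points)) / len(points)


# 0P-style semantics: points are hypothetical worlds in the robot's world model.
reading_in = {"w1": "RED", "w2": "RED", "w3": "GREEN"}  # made-up world model
worlds = set(reading_in)
sees_red_0p: Prop = lambda w: reading_in[w] == "RED"
print(prob(sees_red_0p, worlds))       # 2/3 under a uniform measure over worlds

# 1P-style semantics: points are the possible sensor readings themselves.
experiences = {"RED", "GREEN", "BLUE"}
sees_red_1p: Prop = lambda e: e == "RED"
print(prob(sees_red_1p, experiences))  # 1/3 under a uniform measure over readings
```

The machinery (`extension`, `prob`) is identical in both cases; only the set it is evaluated over changes, which is all I mean by swapping the interpretation.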
Until then we are talking about the truth of statements “Red light was observed” and “Red light was not observed”.
Rephrasing it this way doesn’t change the fact that the observer has not yet been formally specified.
And if our mathematical model doesn’t track any other information, then for the sake of this mathematical model all the robots that observe red are the same entity. The whole point of math is that it’s true not just for one specific person but for everyone satisfying the conditions. That’s what makes it useful.
I agree that that is an important and useful aspect of what I would call 0P-mathematics. But I think it’s also useful to be able to build a robot that has a mode of reasoning in which it deals with its sensor values in a straightforward way.
I’ll say though that I don’t think the usefulness or validity of the 0P/1P idea hinges on whether it helps with anthropics or Sleeping Beauty (note that I marked the Sleeping Beauty idea as speculation).
I agree. Or I’d even say that the usefulness and validity of the 0P/1P idea is inversely correlated with its applications to “anthropic reasoning”.
This is frustrating because I’m trying hard here to specify exactly what I mean by the stuff I call “1st Person”
Yes, I see that and I’m sorry. This kind of warning isn’t aimed at you in particular; it’s a result of my personal frustration with how people in general tend to misuse such ideas.
What makes the interpretations different practically comes from wiring them up differently in the robot—is it reasoning about its world model or about its sensor values? It sounds like you think the 1P interpretation is superfluous, is that right?
I’m not sure. It seems that one of them has to be reducible to the other, though probably in the opposite direction. Isn’t having a world model also a type of experience?
Like, consider two events: “one particular robot observes red” and “any robot observes red”. It seems that the first one is 1st person perspective, while the second is 0th person perspective in your terms. When a robot observes red with its own sensor, it concludes that it in particular has observed red and deduces that this means that any robot has observed red. Observation leads to an update of the world model. But what if all robots had a synchronized sensor that triggered for everyone when any of them observed red? Is it 1st person perspective now?
Probability theory describes the subjective credence of a person who observed a specific outcome from a set of possible outcomes. It’s about 1P in the sense that different people may have different possible outcomes and thus different credences after an observation. But it’s also about 0P because any person who observed the same outcome from the same set of possible outcomes should have the same credence.
I guess I feel that the 0P/1P distinction doesn’t really carve math at its joints. But I’ll have to think more about it.
Isn’t having a world model also a type of experience?
It is if the robot has introspective abilities, which is not necessarily the case. But yes, it is generally possible to convert 0P statements to 1P statements and vice-versa. My claim is essentially that this is not an isomorphism.
But what if all robots had a synchronized sensor that triggered for everyone when any of them observed red? Is it 1st person perspective now?
The 1P semantics is a framework that can be used to design and reason about agents. Someone who thought of “you” as referring to something with a 1P perspective would want to think of it that way for those robots, but it wouldn’t be as helpful for the robots themselves to be designed this way if they worked like that.
Probability theory describes the subjective credence of a person who observed a specific outcome from a set of possible outcomes. It’s about 1P in the sense that different people may have different possible outcomes and thus different credences after an observation. But it’s also about 0P because any person who observed the same outcome from the same set of possible outcomes should have the same credence.
I think this is wrong, and that there is a wholly 0P probability theory and a wholly 1P probability theory. Agents can have different 0P probabilities because they don’t necessarily have the same priors or models, or haven’t seen the same evidence (yes, seeing evidence would be a 1P event, but it can (imperfectly) be converted into a 0P statement—which would essentially be adding a new axiom to the 0P theory).
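Here is a minimal sketch of the kind of conversion I have in mind (the numbers, world names, and likelihoods are all made up for illustration):

```python
# Sketch: the 1P event "I see red" gets recorded as the 0P statement
# "robot_2's sensor read red at time t", which is then conditioned on
# as ordinary evidence in the 0P theory.

# 0P prior over two candidate worlds (made-up numbers)
prior = {"world_lamp_on": 0.5, "world_lamp_off": 0.5}

# Likelihood of the 0P evidence statement in each world (made-up numbers)
likelihood = {"world_lamp_on": 0.9, "world_lamp_off": 0.1}

# Standard Bayesian conditioning on the 0P statement
unnormalized = {w: prior[w] * likelihood[w] for w in prior}
z = sum(unnormalized.values())
posterior = {w: p / z for w, p in unnormalized.items()}
print(posterior)  # approximately {'world_lamp_on': 0.9, 'world_lamp_off': 0.1}
```

The conditioning itself is ordinary 0P updating; the 1P part is only the step where “I see red” gets translated into a statement about a particular robot’s sensor at a particular time, and that translation is where the imperfection can come in.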