Why should your beliefs about your own future actions alter your beliefs about what is in the box?
One might also ask “Why should your beliefs about your own future actions alter your beliefs about Omega’s past actions?”
Your future actions are coupled to Omega’s past actions through the problem’s premise—Omega is supposed to be very skilled at predicting you. This might be unusual, but it’s not that odd.
Suppose someone may or may not have a gene causing a taste for cilantro, and they are tasting cilantro. As they get information about their own tastes, they also revise their estimate of their own ancestry. Is that paradoxical?
That’s not analogous, because you don’t choose (directly) whether you like cilantro. In contrast, I am choosing which box to take, which means I’m choosing what Omega’s past decision was. That can make sense, but only if you accept that you shift between, e.g., Everett branches as your computation unfolds.
Omega’s actions depend on, refer to, or are parameterized by your behavior. “Behavior” here is a mathematical object specifying the meaning of statements such as “it’s how he’ll act in that situation”. That phrase can be meaningfully spoken at any time, independently of whether it’s possible for you to be in that situation in the world in which the phrase is spoken. This is the sense in which your decisions are timeless: it’s not so much you that is living in Platonia as the referents of questions about your behavior, future, past, or hypothetical. These referents behave in a lawful way that is related to your behavior.
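To make that concrete, here is a minimal sketch (Python; all names are hypothetical, nothing here is from the problem statement itself): “your behavior” modeled as a function from situations to actions. The statement “it’s how he’ll act in that situation” just denotes the value of that function at a situation, and anyone holding the function can evaluate it at any time, whether or not the situation ever actually occurs.

```python
# A toy rendering of "behavior as a mathematical object".
# Names are made up for illustration.

def behavior(situation: str) -> str:
    """The agent's decision procedure, as a timeless object."""
    if situation == "newcomb":
        return "one-box"
    return "default"

# Omega's action is parameterized by this object: it consults the
# same function before the agent ever faces the choice.
def omega_fills_box(agent_behavior) -> bool:
    return agent_behavior("newcomb") == "one-box"

print(omega_fills_box(behavior))  # True, evaluated "in the past"
```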
Sorry for going back to the basics on this, but: what does my choice actually mean, then? If there is some mathematical object defining “how I’ll act in a given situation”, what does my choice mean in terms of that object? Am I e.g. simply learning the output of that object?
It’s rather the predictions that are learning about your action. Look for what happens in physics, not for what “happens” in Platonia. That you’ll believe something means that there are events in place that will cause the belief to appear, and these events can be observed. When Omega reconstructs your decision, it answers a specific question about a property of the real world, and the meaning of that question is your actual decision. The decision is connected to Omega’s mind by processes that propagate evidence, mostly from common sources. Mathematical semantics is just a way of representing properties of this process, for example the fact that a correctly predicted algorithm will behave the same way as the original algorithm of which it is a prediction.
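Here is a sketch of that last property, under the assumption that Omega predicts by evaluating a faithful copy of your decision procedure (names are hypothetical): the prediction is just another evaluation of the same function, so it cannot come apart from the action itself, and the payoffs follow.

```python
# Assumption: Omega's prediction = running the same decision procedure.

def decide() -> str:
    """The agent's actual decision procedure."""
    return "one-box"

def omega_predict(decision_procedure) -> str:
    """Omega 'reconstructs' the decision by evaluating the same object."""
    return decision_procedure()

prediction = omega_predict(decide)  # happens before the choice
action = decide()                   # happens after the prediction
assert prediction == action         # the lawfulness described above

# Standard Newcomb payoffs, given this coupling:
box_b = 1_000_000 if prediction == "one-box" else 0
payoff = box_b if action == "one-box" else box_b + 1_000
print(payoff)  # 1_000_000 for this one-boxing procedure; 1_000 if it two-boxed
```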
The trouble with decision-making is that you care about the effect of your choice, and so about the effect of predictions (or future reconstructions) of your choice. It’s intuitively confusing that you can affect the past this way. This is a kind of free-will problem for the evidence about your future decisions that is found in the past. Just as it’s not immediately obvious how you can have free will in a deterministic world, it’s not immediately obvious how the evidence about your future decisions can have the same kind of free will. The confusion is deepened further by the fact that the evidence about your future action, while having free will, is bound to perform exactly the same action as you yourself will perform in the future.
Since both you and the predictions about you seem to entertain a kind of free will, it’s tempting to say that you are in fact all of them. But the real you is the most accurate instance; the others are reconstructions made with the goal of getting as close as possible to the original.
The analogy is between having imperfect information about your future choice (while choosing) and imperfect information about your own tastes (while tasting).
That still doesn’t work. In making my choice (per AnnaSalamon’s drawing of Eliezer_Yudkowsky’s TDT causal model for Newcomb’s problem), I get to disconnect my decision-making process from its parents (parents not shown in AnnaSalamon’s drawing because they’d be disconnected anyway). I do not disconnect the influence of my genes when learning whether I like cilantro.
Moreover, while I can present myself with reasons to change my mind, I cannot (knowingly) feed myself relevant evidence that I do not like cilantro, arbitrarily changing the probability of a given past ancestry.
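The distinction can be put in code. Here is a toy model (my own, not AnnaSalamon’s actual diagram; names are made up): conditioning on a node updates your beliefs about its causal parents, while intervening on it, i.e. cutting the arrows from its parents, does not.

```python
import random

random.seed(0)

def sample():
    gene = random.random() < 0.5  # ancestry: the cilantro-taste gene
    taste = gene                  # gene -> taste (deterministic here)
    return gene, taste

samples = [sample() for _ in range(100_000)]

# Observation: learning "I like cilantro" is evidence about the gene.
liked = [gene for gene, taste in samples if taste]
print(sum(liked) / len(liked))  # ~1.0: P(gene | taste is observed)

# Intervention: *setting* taste severs the gene -> taste arrow, so the
# gene keeps its prior. This is the surgery the decision node gets in
# the TDT drawing, and which learning your own tastes does not get.
forced = [(gene, True) for gene, _ in samples]  # do(taste = like)
print(sum(gene for gene, _ in forced) / len(forced))  # ~0.5: P(gene | do(taste))
```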
None of this Newcomb’s problem stuff is relevant to quantum physics; even if we were living in a non-quantum, Newtonian world, we would have all the same experiences related to this problem.
Yes, the Everett branch concept isn’t necessary, but the weirdness of the situation’s implications does indeed apply under whatever physical laws contain it.