I will admit I’ve had similar thoughts on these problems:
On the smoking lesion, my reaction to the problem was, “Having an urge to smoke is evidence that I have the gene, but having a strong enough urge that I actually go through with smoking is even stronger evidence. Therefore, my decision to smoke determines whether I have the gene.”
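To pin down that intuition, here is a minimal Bayes-rule sketch in Python; every probability in it is an illustrative assumption I'm making up, not part of the smoking lesion problem itself.

```python
# Toy numbers for the smoking-lesion intuition above; every probability here
# is an illustrative assumption, not part of the original problem.
p_gene = 0.2                  # prior probability of having the gene/lesion
p_urge_given_gene = 0.9       # the gene makes an urge to smoke likely
p_urge_given_no_gene = 0.3
p_smoke_given_gene = 0.8      # actually smoking is assumed to be even more
p_smoke_given_no_gene = 0.1   # strongly tied to the gene than the urge is

def posterior(lik_gene, lik_no_gene, prior=p_gene):
    """Bayes' rule: P(gene | evidence)."""
    joint_gene = lik_gene * prior
    joint_no_gene = lik_no_gene * (1 - prior)
    return joint_gene / (joint_gene + joint_no_gene)

print("P(gene | urge)  =", round(posterior(p_urge_given_gene, p_urge_given_no_gene), 3))
print("P(gene | smoke) =", round(posterior(p_smoke_given_gene, p_smoke_given_no_gene), 3))
# The second number comes out larger: under these assumptions, going through
# with smoking is stronger evidence for the gene than merely having the urge.
```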
Similarly, on Newcomb’s problem, let’s say I shift between thinking I should one-box and two-box. That implies that when I temporarily settle on “one box”, there should be money in the box, but then as I shift to “two box”, the money somehow goes away, all of which implies a causal influence on the box after Omega has already decided whether to put money in it.
Btw, I suggested a different causal graph for looking at Newcomb’s problem.
let’s say I shift between thinking I should one-box and two-box. That implies that when I temporarily settle on “one box”, there should be money in the box, but then as I shift to “two box”, the money somehow goes away
Huh? If you end up two-boxing, then there’s no money in the box even when you’re thinking of one-boxing.
But at the time you plan to one-box, you should believe there’s money. But then when you plan to two-box, you should switch to believing there’s no money. But why should you update your beliefs when all that changed was your intent?
But at the time you plan to one-box, you should believe there’s money.
Maybe you shouldn’t. The money is there if you actually take one box, not if you merely plan to. Only after you do take one box, or after you precommit to taking one box in a way you know you won’t alter, should you believe there is money.
Okay, then let me put it this way: up until I perform the final, irreversible act of choosing one-box or two-box, I can make estimates of what conclusion I will arrive at.
For a simple example, if I’m going to compute 12957 + 19234, then before I work out the addition, I might first estimate it as being between 30,000 and 40,000. As I work through it, I can refine the provisional estimate, eventually down to the exact integer.
I can do these same kinds of estimations of what I will decide is the best option for me. I can think, “as of now, two-boxing looks like a good choice, and is what I’ll probably do (p=.52), but I need to consider some other stuff first”. And during that estimation, I estimate that if I do two-box, there will be no money in box B (because Omega has foreseen this line of reasoning). As I converge on the final decision, this estimate of what I will do changes -- but why should that change my beliefs about the box?
The only situation consistent with all of this is one in which I really do switch between worlds where box B contains money and worlds where it doesn’t, as I contemplate my decision; that is the real sticking point for me.
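To make that kind of estimate concrete, here is a toy calculation. The 0.52 is the provisional self-estimate from above; the prediction accuracy is an assumed number, not anything the problem specifies.

```python
# Toy update tying my credence about my own future choice to my credence
# about box B. The 0.52 is the provisional self-estimate from above; the
# prediction accuracy is an assumed number, not given by the problem.
ACCURACY = 0.99      # assumed probability that Omega's prediction matches my choice
p_two_box = 0.52     # my current estimate that I will end up two-boxing

# Box B contains the money iff Omega predicted one-boxing:
# P(money) = P(one-box) * ACCURACY + P(two-box) * (1 - ACCURACY)
p_money = (1 - p_two_box) * ACCURACY + p_two_box * (1 - ACCURACY)
print(f"credence that box B is full: {p_money:.3f}")

# As the self-estimate drifts, so does the credence about the box:
for p in (0.1, 0.5, 0.9):
    print(p, round((1 - p) * ACCURACY + p * (1 - ACCURACY), 3))
```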
Why should your beliefs about your own future actions alter your beliefs about what is in the box?
One might also ask “Why should your beliefs about your own future actions alter your beliefs about Omega’s past actions?”
Your future actions are coupled to Omega’s past actions through the problem’s premise—Omega is supposed to be very skilled at predicting you. This might be unusual, but it’s not that odd.
Suppose someone may or may not have a gene causing a taste for cilantro, and they are tasting cilantro. As they get information about their own tastes, they also revise their estimate of their own ancestry. Is that paradoxical?
Suppose someone may or may not have a gene causing a taste for cilantro, and they are tasting cilantro. As they get information about their own tastes, they also revise their estimate of their own ancestry. Is that paradoxical?
That’s not analogous, because you don’t choose (directly) whether you like cilantro. In contrast, I am choosing what box to take, which means I’m choosing what Omega’s past decision was. That can make sense, but only if you can accept that you shift between e.g. Everett branches as your computation unfolds.
Omega’s actions depend on, refer to, or are parameterized by your behavior. “Behavior” here is a mathematical object that specifies the meaning of, for example, the statement “this is how he’ll act in that situation”. That phrase can be meaningfully spoken at any time, independently of whether it’s possible for you to be in that situation in the world in which the phrase is spoken. This is the sense in which your decisions are timeless: it’s not so much you that is living in Platonia as the referents of the questions about your behavior, future or past or hypothetical. These referents behave in a certain lawful way that is related to your behavior.
Sorry for going back to the basics on this, but: what does my choice actually mean, then? If there is some mathematical object defining “how I’ll act in a given situation”, what does my choice mean in terms of that object? Am I e.g. simply learning the output of that object?
Rather, the predictions are learning about your action. Look for what happens in physics, not for what “happens” in Platonia. That you’ll believe something means that there are events already in place that will cause the belief to appear, and those events can be observed. When Omega reconstructs your decision, it answers a specific question about a property of the real world, and the meaning of that question is your actual decision. This decision is connected to Omega’s mind by processes that propagate evidence, mostly from common sources. Mathematical semantics is just a way of representing properties of this process, for example the fact that a correctly predicted algorithm will behave the same way as the original algorithm it is a prediction of.
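As a toy illustration of that last point (a correctly predicted algorithm behaving the same as the original), here is a sketch in which Omega’s “prediction” is literally a run of the same decision procedure; the function names and payoffs are my own assumptions for the sketch only.

```python
# Toy illustration: Omega's "prediction" is just another run of the same
# decision procedure, so it cannot come apart from the actual choice.
# The function names and payoffs are assumptions for the sketch only.

def agent_decision() -> str:
    """The agent's decision procedure; returns 'one-box' or 'two-box'."""
    return "one-box"

def omega_fills_box_b(predict) -> bool:
    """Omega fills box B iff its reconstruction of the agent one-boxes."""
    return predict() == "one-box"

box_b_full = omega_fills_box_b(agent_decision)   # Omega moves first...
actual_choice = agent_decision()                 # ...the agent chooses later.

# Same computation both times, so prediction and choice agree by construction.
assert (actual_choice == "one-box") == box_b_full

payoff = (1_000_000 if box_b_full else 0) + (1_000 if actual_choice == "two-box" else 0)
print(actual_choice, payoff)
```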
The trouble with decision-making is that you care about the effect of your choice, and so about the effect of predictions (or future reconstructions) of your choice. It’s intuitively confusing that you can affect the past this way. This is a kind of free-will problem for the evidence about your future decisions that is found in the past. Just as it’s not immediately obvious how you can have free will in a deterministic world, it’s not immediately obvious how the evidence about your future decisions can have the same kind of free will. The confusion is deepened further by the fact that the evidence about your future action, while having free will, is bound to perform exactly the same action as you yourself will in the future.
Since both you and predictions about you seem to entertain a kind of free will, it’s tempting to say that you are in fact all of them. But only the real you is fully accurate; the others are reconstructions made with the goal of getting as close as possible to the original.
The analogy is between having imperfect information of your future choice (while choosing), and imperfect information of your own tastes (while tasting).
None of this Newcomb’s problem stuff is relevant to quantum physics; even if we were living in a non-quantum, Newtonian world, we would have all the same experiences related to this problem.
The analogy is between having imperfect information of your future choice (while choosing), and imperfect information of your own tastes (while tasting).
That still doesn’t work. In making my choice (per AnnaSalamon’s drawing of Eliezer_Yudkowsky’s TDT causal model for Newcomb’s problem), I get to disconnect my decision-making process from its parents (parents not shown in AnnaSalamon’s drawing because they’d be disconnected anyway). I do not disconnect the influence of my genes when learning whether I like cilantro.
Moreover, while I can present myself with reasons to change my mind, I cannot (knowingly) feed myself relevant evidence that I do not like cilantro, arbitrarily changing the probability of a given past ancestry.
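The observation-versus-intervention distinction being drawn here can be made concrete with a toy version of the cilantro case; all the numbers below are assumptions chosen only for illustration.

```python
# Observing vs. intervening in a toy gene -> taste model; numbers are
# illustrative assumptions. Observing the taste shifts P(gene); setting the
# taste by intervention severs the gene -> taste arrow and leaves it alone.
p_gene = 0.5
p_like_given_gene = 0.9
p_like_given_no_gene = 0.2

# Conditioning on the observation "likes cilantro" (Bayes' rule):
p_like = p_like_given_gene * p_gene + p_like_given_no_gene * (1 - p_gene)
p_gene_given_like = p_like_given_gene * p_gene / p_like
print("observe 'likes cilantro':  P(gene) ->", round(p_gene_given_like, 3))  # moves up

# Intervening, do(likes := True): the incoming arrow is cut, so the prior
# over the gene is untouched.
print("do(likes := True):         P(gene) ->", p_gene)                       # unchanged
```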
None of this Newcomb’s problem stuff is relevant to quantum physics; even if we were living in a non-quantum, Newtonian world, we would have all the same experiences related to this problem.
Yes, the Everett branch concept isn’t necessary, but still, the weirdness of the situation’s implications does indeed apply under whatever physical laws contain it.