Not at all. Your decision procedure’s output leads either to money being put into one box and to you choosing that box, or to only a little money being put into the second box and to you choosing both.
If you ever anticipate some sort of prisoner’s dilemma between identical instances of your decision procedure (and that’s what Newcomb’s problem is), you adjust the decision procedure accordingly. It doesn’t matter in the slightest to the prisoner’s dilemma whether there is temporal separation between the instances of the decision procedure, or spatial separation; nothing changes if Omega doesn’t learn your decision directly, but instead creates the items inside the boxes immediately before they are opened. Nothing even changes if Omega hears your choice and only then puts the items into the boxes. In all of those cases, a run of the decision procedure leads to an outcome.
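To make the point concrete, here is a minimal sketch (all function names here are hypothetical, not from the problem statement) in which Omega’s “prediction” is nothing more than another run of the very same decision procedure. The payoff then depends only on which procedure you are running:

```python
# Sketch: Newcomb's problem where Omega predicts by running the same
# decision procedure the agent will run. Payoffs: the opaque box gets
# $1,000,000 iff Omega predicts one-boxing; the transparent box always
# holds $1,000.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def run_newcomb(decision_procedure):
    # Omega's "prediction" is just another run of the same procedure.
    prediction = decision_procedure()
    opaque_box = 1_000_000 if prediction == "one-box" else 0

    # The agent's actual run, possibly long after Omega's.
    choice = decision_procedure()
    if choice == "one-box":
        return opaque_box
    return opaque_box + 1_000

print(run_newcomb(one_boxer))  # 1000000
print(run_newcomb(two_boxer))  # 1000
```

Temporal or spatial separation between the two calls to `decision_procedure` changes nothing here: the payoff is fixed by the procedure itself.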
prisoner’s dilemma between identical instances of your decision procedure (that’s what Newcomb’s problem is)
I’m not so sure. The output of your decision procedure is the same as the output of Omega’s prediction procedure, but that doesn’t tell you how algorithmically similar they are.
Well, if you are to do causal decision theory, you must also be in a causal world (or at least assume you are in a causal world), and in a causal world, correlation of Omega’s decisions with yours implies either coincidence or some causation: either Omega’s choices cause your choices, your choices cause Omega’s choices, or there is a common cause of both. The common cause could be the decision procedure itself, or the childhood event that made a person adopt the decision procedure, etc. In the latter case, it’s not even a question of decision theory. The choice of box has already been made, whether by chance, by parents, or by someone who convinced you to one-box or two-box. From that point on, it has been mechanistically propagating by the laws of physics, affecting both Omega and you (and even before that point, it had been mechanistically propagating ever since the Big Bang).
The huge problem with applying decision theory is the idea of an immaterial soul that does the deciding however it wishes. That’s not how things are; decisions have causes. Using causal decision theory together with the idea of an immaterial soul that decides from outside the causal universe leads to a fairly inconsistent picture of the world.