If we taboo terms like cause and control, how could we state this?
Nohow. Your decision procedure's output leads both to money being put into one box and to you choosing that box, or both to only a little money being put into the second box and to you choosing both boxes.
If you ever anticipate some sort of prisoner's dilemma between identical instances of your decision procedure (which is what Newcomb's problem is), you adjust the decision procedure accordingly. It doesn't matter in the slightest to the prisoner's dilemma whether there is temporal or spatial separation between the instances of the decision procedure; nothing changes if Omega doesn't learn your decision directly, yet creates the items inside the boxes immediately before they are opened. Nothing even changes if Omega hears your choice and only then puts the items into the boxes. In all of those cases, a run of the decision procedure leads to an outcome.
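To make "a run of the decision procedure leads to an outcome" concrete, here is a minimal toy sketch, assuming (hypothetically) that Omega predicts by running an exact copy of your procedure; the function names and payoffs are illustrative, not part of the original argument:

```python
# Toy Newcomb setup (hypothetical simplification): Omega predicts by
# running an exact copy of the agent's decision procedure.

def decision_procedure():
    """The agent's decision procedure; returns 'one-box' or 'two-box'."""
    return "one-box"

def omega_fills_boxes(predict):
    """Omega runs an identical instance of the procedure and fills the boxes."""
    prediction = predict()
    box_a = 1_000                                        # always present
    box_b = 1_000_000 if prediction == "one-box" else 0  # filled only if one-boxing predicted
    return box_a, box_b

# Omega's instance of the procedure runs first; the temporal separation
# between the two instances does not matter.
box_a, box_b = omega_fills_boxes(decision_procedure)

# The agent's instance runs later: identical procedure, identical output.
choice = decision_procedure()
payoff = box_b if choice == "one-box" else box_a + box_b
print(choice, payoff)  # -> one-box 1000000
```

One run of the procedure fixes both the boxes' contents and the choice, which is the sense in which the procedure's output "leads to an outcome".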
I’m not so sure. The output of your decision procedure is the same as the output of Omega’s prediction procedure, but that doesn’t tell you how algorithmically similar they are.
Well, if you are to do causal decision theory, you must also be in a causal world (or at least assume you are in a causal world), and in a causal world, a correlation of Omega's decisions with yours implies either coincidence or some causation: either Omega's choices cause your choices, your choices cause Omega's choices, or there is a common cause of both your and Omega's choices. The common cause could be the decision procedure itself, or the childhood event that makes a person adopt the decision procedure, and so on. In the latter case, it's not even a question of decision theory: the choice of box has already been made, whether by chance, by parents, or by someone who convinced you to one-box or two-box. From that point on, it has been propagating mechanistically according to the laws of physics, affecting both Omega and you (and even before that point, it had been propagating mechanistically ever since the Big Bang).
The huge problem with applying decision theory is the idea of an immaterial soul that does the deciding however it wishes. That's not how things are: decisions have causes. Using causal decision theory together with the idea of an immaterial soul that decides from outside the causal universe leads to a fairly inconsistent world.
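To illustrate the common-cause case, here is a toy simulation (purely a hypothetical sketch; the coin-flip "adoption event" and the assumption that it fully determines both the prediction and the choice are simplifications, not claims from the thread):

```python
import random

def one_world(seed):
    rng = random.Random(seed)
    # Common cause: an earlier event fixes which decision procedure
    # this person ends up with (a coin flip stands in for "chance,
    # parents, or someone who convinced you").
    adopted_procedure = rng.choice(["one-box", "two-box"])

    # Both Omega's prediction and the agent's later choice are
    # downstream of that same fact; neither causes the other.
    omega_prediction = adopted_procedure
    agent_choice = adopted_procedure
    return omega_prediction, agent_choice

# Across many sampled "worlds", prediction and choice are perfectly
# correlated, purely through the common cause.
results = [one_world(seed) for seed in range(1000)]
agreement = sum(pred == choice for pred, choice in results) / len(results)
print(agreement)  # -> 1.0
```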
You can’t. But why would we want to taboo those terms?
The inability to taboo a term can indicate that the term's meaning is not sufficiently clear and well thought out.
I want to know what they mean in context. I feel I cannot evaluate the statement otherwise; I am not sure what it is telling me to expect.
My understanding is that tabooing is usually “safe”.
If a concept is well-defined and non-atomic, then you can break it down into its definition, and the argument will still be valid.
If a concept is not well-defined then why are you using it?
So the only reasons for not tabooing something would seem to be:
My above argument is confused somehow (e.g. the concepts of “well-defined” or “atomic” are themselves not well-defined and need tabooing)
For convenience: someone can effectively stall an argument by asking you to taboo every word
The concepts are atomic
Treating control (and to a lesser extent causality) as atomic seems to imply a large inferential distance from the worldview popular on LW. Is there a sequence or something else I can read to see how to get there from here?
Refusing to taboo may be a good idea if you don't know how, and using the opaque concept gives you better results (in the intended sense) than applying the best available theory of how it works. (This is different from declaring understanding of a concept undesirable or impossible in principle.)
Yes, that makes sense. Do you think this applies here?
Same reason we usually play “rationalist’s taboo” around here: to separate the denotations of the terms from their connotations and operate on the former.