You seem to be fighting the hypothetical, but I don’t know if you’re doing it out of mistrust or because some background would be helpful. I’ll assume helpful background would be helpful… :-)
A program could be designed to (1) search for relevant sensory data within a larger context, (2) derive a mixed strategy given the input data, (3) get more bits of salt from local thermal fluctuations than log2(number of possible actions), (4) drop the salt into a pseudo-random number generator over its derived mixed strategy, and (5) output whatever falls out as its action. This rough algorithm seems strongly deterministic in some ways, and yet also strongly reminiscent of “choice” in others.
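For concreteness, here is a minimal Python sketch of that five-step loop. The feature filter and the derive_mixed_strategy stand-in are hypothetical, and os.urandom stands in for “local thermal fluctuations”; the only point is that steps 1, 2, 4, and 5 are ordinary deterministic computation, with physical randomness entering only at step 3.

```python
import math
import os
import random

def derive_mixed_strategy(features):
    # Stand-in for step (2): whatever deterministic rule maps the extracted
    # features to a probability distribution over the possible actions.
    return {"one-box": 0.9, "two-box": 0.1}

def act(observation):
    # (1) Search the observation for decision-relevant data.
    features = [item for item in observation if "omega" in item.lower()]

    # (2) Derive a mixed strategy from that data.
    strategy = derive_mixed_strategy(features)

    # (3) Gather more bits of salt than log2(number of possible actions);
    #     os.urandom stands in here for local thermal fluctuations.
    bits_needed = math.ceil(math.log2(len(strategy))) + 8
    salt = int.from_bytes(os.urandom((bits_needed + 7) // 8), "big")

    # (4) Drop the salt into a pseudo-random number generator, and
    # (5) output whichever action falls out of sampling the mixed strategy.
    rng = random.Random(salt)
    actions, weights = zip(*strategy.items())
    return rng.choices(actions, weights=weights, k=1)[0]

print(act(["Omega presents two boxes", "the room is cold"]))
```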
This formulation reduces the “magic” of Omega to predicting the relatively fixed elements of the agent (i.e., steps 1, 2, and 4), which seems roughly plausible as a matter of psychology and input knowledge and so on, and also either (A) knowing from this that the strategy that will be derived isn’t actually mixed, so the salt is irrelevant, or else (B) having access to or control over the salt in step 3.
In AI design, steps 1 and 2 are under the programmer’s control to some degree. Some ways of writing the program might make the AI more or less tractable/benevolent/functional/wise, and it seems like it would be good to know which ways are likely to produce better outcomes before any such AI is built and achieves takeoff rather than after. Hence the interest in this thought experiment as an extreme test case. The question is not whether step 3 is pragmatically possible for an imaginary Omega to hack in real life. The question is how to design steps 1 and 2 in toy scenarios where the program’s ability to decide how to pre-commit and self-edit is the central task, so that harder scenarios can be attacked as “similar to a simpler solved problem”.
If you say “Your only choices are flipping a coin or saying a predetermined answer”, you’re dodging the real question. You can be dragged back to the question by simply positing “Omega predicts the coin flip, what then?” If there’s time and room for lots and lots of words (rather than just seven words), then another way to bring attention back to the question is to explain what fighting the hypothetical is, try to build rapport, and see if you can learn to play along so that you can help advance a useful intellectual project.
If you still “don’t get it”, then please, at least don’t clog up the channel. If you do get it, please offer better criticism. Like, if you know of a different but better thought experiment where effectively-optimizing self-modifying pre-commitment is the central feature of study, that would be useful.
I don’t fight any hypothetical. If backwards causality is possible, one-boxing obviously wins.
But backwards causality cannot exist in reality, and therefore my decision cannot affect Omega’s prediction of that decision. I would be very surprised if a large majority of LW posters disagreed with that statement; most of them seem simply to ignore this level of the problem.
In this example, the correct solution would not be to “choose” to one-box, but to choose to adopt a strategy that causes you to one-box before Omega makes its prediction, and therefore before you know you’re playing Newcomb. This is not Newcomb anymore; it is a new problem. In this new problem, CDT will decide to adopt a strategy that causes it to one-box (it will precommit).
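For concreteness, here is a toy version of that comparison in Python, assuming the standard Newcomb payoffs and a predictor that simply reads off whatever policy is in force at prediction time (both assumptions are mine, not part of the thread). Before the prediction, precommitting to one-box wins; after the prediction is fixed, two-boxing gains exactly $1,000 either way, which is the usual CDT dominance argument.

```python
# Standard Newcomb payoffs: $1,000,000 in the opaque box iff one-boxing was
# predicted, plus a fixed $1,000 in the transparent box.
BIG, SMALL = 1_000_000, 1_000

def payoff(predicted_one_box, take_one_box):
    opaque = BIG if predicted_one_box else 0
    return opaque if take_one_box else opaque + SMALL

# Deciding BEFORE the prediction: the chosen policy is what Omega will read.
ev_precommit_one_box = payoff(predicted_one_box=True, take_one_box=True)    # 1,000,000
ev_no_precommitment  = payoff(predicted_one_box=False, take_one_box=False)  # 1,000 (CDT two-boxes later)

# Deciding AFTER the prediction: for either fixed prediction p, two-boxing
# gains exactly SMALL more, which is the CDT dominance argument.
for p in (True, False):
    assert payoff(p, take_one_box=False) == payoff(p, take_one_box=True) + SMALL

print(ev_precommit_one_box, ev_no_precommitment)
```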
Similarly, if a CDT agent is facing no immediate decision problem but has the capability to self-modify, it will modify itself into an agent that implements a new decision theory (call it, for example, CDT++). The self-modified agent will then behave as if it implements a Reflective Decision Theory (UDT, TDT, etc.) for the purpose of all influence over the universe after the time of self-modification, but like CDT for the purpose of all influence before the time of self-modification. This means, roughly, that it will behave as if it had made all the correct ‘precommitments’ at that time. It’ll then cooperate with equivalent agents in prisoner’s dilemmas and one-box on future Newcomb’s problems, unless Omega says “Oh, and I made the prediction and filled the boxes back before you self-modified away from CDT; I’m just showing them to you now”.
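A rough way to phrase that behavior as code, purely as a sketch: the CDT++ name comes from the comment above, and the timestamp comparison is just my illustrative way of encoding “reflective for influence after the self-modification, CDT-like for influence before it”, not a formal specification.

```python
from dataclasses import dataclass

@dataclass
class CDTPlusPlus:
    """Toy agent: reflective about anything decided after its self-modification,
    CDT-like about anything that was already fixed before it."""
    self_modification_time: float

    def newcomb(self, prediction_time: float) -> str:
        if prediction_time >= self.self_modification_time:
            # Omega's prediction is downstream of the post-modification policy,
            # so the policy itself is what gets predicted: one-box.
            return "one-box"
        # The prediction was fixed before the modification, so nothing chosen
        # now can influence it; the CDT dominance argument applies: two-box.
        return "two-box"

agent = CDTPlusPlus(self_modification_time=100.0)
print(agent.newcomb(prediction_time=150.0))  # one-box: boxes filled after modification
print(agent.newcomb(prediction_time=50.0))   # two-box: boxes filled before modification
```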
A CDT agent will do this if it can be proven that it cannot make worse decisions after the modification than it would have made without modifying itself. I actually tried to find literature on this a while back but couldn’t find any, so I assigned a very low probability to the possibility that this could be proven. Since you seem to be familiar with the topic, do you know of any?
I am somewhat familiar with the topic, but note that I am most familiar with the work that has already moved past CDT (i.e., that considers CDT irrational and inferior to a reflective decision theory along the lines of TDT or UDT). Thus far, nobody I’m aware of has got around to formally writing up a “What CDT self-modifies to” paper (I wish they would!). It would be interesting to see what someone starting from the assumption that CDT is sane could come up with. Again, I’m unfamiliar with any such attempts, but in this case my unfamiliarity is much weaker evidence about whether such work exists.
I wasn’t asking for a concrete alternative to CDT. If anything, I’m interested in a proof that such a decision theory can possibly exist, because trying to find an alternative before you’ve proven this seems like a task with a very low chance of success.
I wasn’t offering alternatives; I was looking specifically at what CDT will inevitably self-modify into (which is itself not optimal, just what CDT will do). The mention of alternatives was to convey that following what I say on the subject, and the work I refer to, would require inferential steps that you have indicated you aren’t likely to make.
Incidentally, proving that CDT will (given the option) modify into something else is a very different thing from proving that there is a better alternative to CDT. Either could be true without implying the other.
That is true, and if you cannot prove that such a decision theory exists, then CDT modifying itself is not necessarily the correct answer to meta-Newcomb, correct?