Consider programs that, given the description of a situation (possibly including a chain of events leading to it) and a list of possible actions, return one of the actions. It doesn’t seem to be a stretch of language to say that such programs are “choosing”, because the way those programs react to their situation can be very similar to the way humans react (consider: finding the shortest path between two points, playing a turn-based strategy game, etc.).
Whether programs that are hard-coded to always return a particular answer “choose” or not is a very boring question of semantics, like “does a tree falling in the forest make a sound if no-one is around to hear it”.
Given a description of Newcomb’s problem, a well-written program will one-box, and a badly-written one will two-box. The difference between the two is not trivial.
Given a description of Newcomb’s problem, a well-written program will one-box, and a badly-written one will two-box. The difference between the two is not trivial.
I see your point now, and I agree with the quoted statement. However, there’s a difference between Newcomb, where you make your decision after Omega has made its prediction, and “meta-Newcomb”, where you’re allowed to precommit before Omega makes its prediction, for example by choosing your programming. In meta-Newcomb, I don’t even have to consider being a computer program that can be simulated; I can just give my good friend Epsilon, who always does exactly what he is told, a gun and tell him to shoot me if I lie, then tell Omega I’m going to one-box, and then Omega would make its prediction. I would one-box, get $1,000,000 and, more importantly, not get shot.
This is a decision that CDT would make, given the opportunity.
there’s a difference between Newcomb, where you make your decision after Omega has made its prediction, and “meta-Newcomb”, where you’re allowed to precommit before Omega makes its prediction, for example by choosing your programming.
I agree that meta-Newcomb is not the same problem, and that in meta-Newcomb CDT would precommit to one-box.
However, even in normal Newcomb, it’s possible to have agents that behave as if they had precommitted when they realize precommitting would have been better for them. More specifically, in pseudocode:
function take_decision(information_about_world, actions):
    for each action in actions:
        calculate the utility that an agent that always returns that action would have got
    return the action that got the highest utility
There are some subtleties, notably about how to take the information about the world into account, but an agent built on this model should one-box on problems like Newcomb’s, while two-boxing in cases where Omega decides by flipping a coin.
(Such an agent, however, doesn’t cooperate with itself in the prisoner’s dilemma; you need a better agent for that.)
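The pseudocode above can be turned into a minimal runnable sketch. The payoffs ($1,000,000 in the opaque box iff Omega predicts one-boxing, $1,000 always in the transparent box) follow the standard problem; modeling Omega’s prediction as a probability distribution conditioned on the agent’s hard-coded disposition is an assumption made here for illustration:

```python
def payoff(action, prediction):
    # Assumed payoffs: the opaque box holds $1,000,000 iff Omega
    # predicted one-boxing; the transparent box always holds $1,000.
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if action == "one-box" else opaque + 1_000

def expected_payoff(action, prediction_dist):
    # Average the payoff over Omega's possible predictions.
    return sum(p * payoff(action, pred) for pred, p in prediction_dist.items())

def take_decision(actions, prediction_given_disposition):
    # For each action, evaluate an agent hard-coded to always take it;
    # Omega's prediction may depend on that hard-coded disposition.
    return max(actions,
               key=lambda a: expected_payoff(a, prediction_given_disposition(a)))

actions = ["one-box", "two-box"]

# A perfect Omega predicts exactly what the agent is disposed to do.
perfect_omega = lambda disposition: {disposition: 1.0}
# A coin-flipping Omega's prediction ignores the disposition entirely.
coin_omega = lambda disposition: {"one-box": 0.5, "two-box": 0.5}

print(take_decision(actions, perfect_omega))  # one-box
print(take_decision(actions, coin_omega))     # two-box
```

Against the perfect predictor, the hard-coded one-boxer nets $1,000,000 versus the two-boxer’s $1,000, so the agent one-boxes; against the coin, two-boxing gains an extra $1,000 whatever is predicted, so it two-boxes.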
You are 100% correct. However, if you say “it’s possible to have agents that behave as if they had precommitted”, then you are not talking about the best decision to make in this situation, but the best decision theory to have in this situation, and that is, again, meta-Newcomb, because the choice of which decision theory to follow is a decision you have to make before Omega makes its prediction. Switching to this decision theory after Omega makes its prediction doesn’t work, obviously, so this is not a solution for Newcomb.
I can just give my good friend Epsilon, who always does exactly what he is told, a gun and tell him to shoot me if I lie, then tell Omega I’m going to one-box, and then Omega would make its prediction. I would one-box, get $1,000,000 and, more importantly, not get shot.
When I first read this I took it literally, as using Epsilon directly as a lie detector. That had some interesting potential side effects (like death) for a CDT agent. On second reading I take it to mean “stay around with the gun until after everything is resolved and, if I forswear myself, kill me”. As a CDT agent you need to be sure that Epsilon will stay with the gun until you have abandoned the second box. If Epsilon just scans your thoughts, detects whether you are lying, and then leaves, then CDT will go ahead and take both boxes anyway. (It’s mind-boggling to think of agents that couldn’t even manage cooperation with themselves with $1m on the line and a truth oracle right there to help them!)
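The claim that CDT takes both boxes once the prediction is fixed (and Epsilon is gone) can be checked with a toy calculation, assuming the standard payoffs:

```python
def payoff(action, prediction):
    # Assumed payoffs: the opaque box holds $1,000,000 iff Omega
    # predicted one-boxing; the transparent box always holds $1,000.
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if action == "one-box" else opaque + 1_000

# With the prediction already made and nothing left to enforce the
# promise, CDT holds the prediction fixed and compares payoffs:
for prediction in ["one-box", "two-box"]:
    best = max(["one-box", "two-box"], key=lambda a: payoff(a, prediction))
    print(prediction, "->", best)  # two-box dominates either way
```

Whichever prediction Omega made, two-boxing adds $1,000 on top of whatever the opaque box holds, which is exactly why the gun has to stay pointed until the second box is abandoned.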
Yeah, I meant that Epsilon would shoot if you two-box after having said you would one-box. In the end, “Epsilon with a gun” is just a metaphor for / specific instance of precommitting, as is “computer program that can choose its programming”.