I assume (please correct me if I’m mistaken) that you’re referring to the payout-value as the output of the world program. In that case, a P-style program and a P1-style program can certainly give different outputs for some hypothetical outputs of S (for the given inputs). However, both programs’ payout-outputs will be the same for whatever turns out to be the actual output of S (for the given inputs).
P and P1 have the same causal structure, and they have the same output with respect to (whatever is) the actual output of S (for the given inputs). But P and P1 differ counterfactually: they disagree about what the payout-output would be if the output of S (for the given inputs) were different from whatever it actually is.
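To make that concrete, here is a minimal sketch of the structural difference. This is my own toy code, not the thread’s actual programs: the observation string, the payoffs, and the “one-box”/“two-box” outputs are all made up, and the extraneous “E*1e9” payout terms are already dropped here, as the PS below suggests. P consults the player’s program S at the decision point, while P1 consults a fixed internal copy S1:

    # S1 is the fixed copy of the player's program that Omega runs;
    # its actual output is "one-box", purely for illustration.
    def S1(obs):
        return "one-box"

    # P calls the player's program S at the decision point.
    def P(S):
        return 1_000_000 if S("boxes") == "one-box" else 1_000

    # P1 calls the hard-coded copy S1 instead, so its payout never
    # depends on the S that is passed in.
    def P1(S):
        return 1_000_000 if S1("boxes") == "one-box" else 1_000

For the actual output of S (where S behaves like S1), P(S1) == P1(S1) == 1_000_000. But substitute a hypothetical two-boxer, say two_boxer = lambda obs: "two-box", and the programs come apart: P(two_boxer) == 1_000, while P1(two_boxer) is still 1_000_000.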
So I guess you could say that what’s unspecified are the counterfactual consequences of a hypothetical decision, given the (fully specified) physical structure of the scenario. But figuring out the counterfactual consequences of a decision is the main thing that the decision theory itself is supposed to do for us; that’s what the whole Newcomb/Prisoner controversy boils down to. So I think it’s the solution that’s underspecified here, not the problem itself. We need a theory that takes the physical structure of the scenario as input, and generates counterfactual consequences (of hypothetical decisions) as outputs.
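In code, the interface I have in mind would look something like the following sketch. It assumes world programs take the player’s program as a parameter (as in the toy P and P1 above), and the naive substitution step in the body is exactly the contested part that a real theory would have to get right:

    # Map each hypothetical decision to the payout the world program
    # yields when the player's program is replaced by one that makes
    # that decision unconditionally. (Whether simple substitution is
    # the right notion of counterfactual is the whole controversy.)
    def counterfactual_consequences(world_program, hypothetical_decisions):
        return {d: world_program(lambda obs, d=d: d)
                for d in hypothetical_decisions}

With the toy programs above, counterfactual_consequences(P, ["one-box", "two-box"]) gives {"one-box": 1_000_000, "two-box": 1_000}, whereas the same call on P1 gives 1_000_000 for both decisions, which exhibits the counterfactual difference between them.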
PS: To make P and P1 fully comparable, drop the “E*1e9” terms in P, so that both programs model the conventional transparent-boxes problem without an extraneous pi-preference payout.
This conversation is a bit confused. Looking back, P and P1 aren’t the same at all; P1 corresponds to the case where Omega never asks you for any decision at all! If S must be equal to S1, and S1 is part of the world program, then S must be part of the world program too, not chosen by the player. And if choosing an S such that S != S1 is allowed, then that corresponds to the case where Omega simulates someone else (who is left unspecified).
The root of the confusion seems to be that Wei Dai wrote “def P(i): …” when he should have written “def P(S): …”, since S is what the player gets to control. I’m not sure where making i a parameter to P came from, since the English description of the problem had i as part of the world program, not as a parameter to it.
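A sketch of the contrast between the two signatures (toy bodies only; the payoffs, the fixed value of i, and the baked-in player program S0 are all made up for illustration):

    # As written: i is the parameter, so the player's program is
    # implicitly baked into the world program as a constant (S0 here).
    def S0(i):
        return "one-box"

    def P_as_written(i):
        return 1_000_000 if S0(i) == "one-box" else 1_000

    # As suggested: S is the parameter the player controls, and i is
    # a constant of the scenario inside the world program.
    def P_as_suggested(S):
        i = 3  # fixed by the scenario's description; value made up
        return 1_000_000 if S(i) == "one-box" else 1_000

In the first form there is nothing left for the player to vary; in the second, the player’s choice of S is the only free input, which matches the English description of the problem.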