It seems to me that the world-program is part of the problem description, not the analysis. It’s equally tricky whether it’s given in English or in a computer program; Wei Dai just translated it faithfully, preserving the strange properties it had to begin with.
My concern is that there may be several world-programs that correspond faithfully to a given problem description, but that correspond to different analyses, yielding different decision prescriptions, as illustrated by the P1 example above. (Upon further consideration, I should probably modify P1 to include “S()=S1()” as an additional input to S and to Omega_Predict, duly reflecting that aspect of the problem description.)
If there are multiple translations, then either the translations are all mathematically equivalent, in the sense that they agree on the output for every combination of inputs, or the problem is underspecified. (This seems like it ought to be the definition of the word “underspecified”. It’s also worth noting that all game-theory problems are underspecified in this sense, since they contain an opponent you know little about.)
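To pin down what “agree on the output for every combination of inputs” means, here is a toy sketch in Python. The two programs below are hypothetical stand-ins, not the actual P or P1: they decompose the payout differently inside, but they are mathematically equivalent in the sense above, so a theory that treats the world-program as opaque cannot tell them apart.

    # Toy illustration only: W_a and W_b are hypothetical world-programs,
    # not the P/P1 under discussion.  They are structured differently but
    # return the same payout for every possible strategy output.

    def W_a(strategy_output):
        # pays 1,000,000 if the agent outputs 1, otherwise 1,000
        return 1_000_000 if strategy_output == 1 else 1_000

    def W_b(strategy_output):
        # same input-output behavior, different internal decomposition
        base = 1_000
        bonus = 999_000 if strategy_output == 1 else 0
        return base + bonus

    # Mathematically equivalent: they agree on every input.
    assert all(W_a(s) == W_b(s) for s in (0, 1))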
Now, if two world programs were mathematically equivalent but a decision theory gave them different answers, then that would be a serious problem with the decision theory. And this does, in fact, happen with some decision theories; in particular, it happens to theories that work by trying to decompose the world program into parts, when those parts are related in a way that the decision theory doesn’t know how to handle. If you treat the world-program as an opaque object, though, then all mathematically equivalent formulations of it should give the same answer.
I assume (please correct me if I’m mistaken) that you’re referring to the payout-value as the output of the world program. In that case, a P-style program and a P1-style program can certainly give different outputs for some hypothetical outputs of S (for the given inputs). However, both programs’ payout-outputs will be the same for whatever turns out to be the actual output of S (for the given inputs).
P and P1 have the same causal structure. And they have the same output with regard to (whatever is) the actual output of S (for the given inputs). But P and P1 differ counterfactually as to what the payout-output would be if the output of S (for the given inputs) were different than whatever it actually is.
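To illustrate the kind of counterfactual divergence I mean, here is a deliberately oversimplified sketch using hypothetical stand-ins (not the actual P and P1): one program routes the payout through the hypothetical output of S, the other through a hard-coded copy that happens to equal S’s actual output. They agree at the actual output and disagree everywhere else.

    # Hypothetical stand-ins for the two styles of world-program.
    # Suppose the actual output of S (for the given inputs) is 1.
    S_ACTUAL = 1

    def payout_P_style(s_output):
        # the payout tracks the (possibly hypothetical) output of S
        return 1_000_000 if s_output == 1 else 1_000

    def payout_P1_style(s_output):
        # the payout tracks a hard-coded copy S1 of S's actual output,
        # so varying the hypothetical s_output changes nothing
        S1 = S_ACTUAL
        return 1_000_000 if S1 == 1 else 1_000

    # Same payout at the actual output of S...
    assert payout_P_style(S_ACTUAL) == payout_P1_style(S_ACTUAL)
    # ...but different payouts for the counterfactual alternative.
    assert payout_P_style(0) != payout_P1_style(0)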
So I guess you could say that what’s unspecified are the counterfactual consequences of a hypothetical decision, given the (fully specified) physical structure of the scenario. But figuring out the counterfactual consequences of a decision is the main thing that the decision theory itself is supposed to do for us; that’s what the whole Newcomb/Prisoner controversy boils down to. So I think it’s the solution that’s underspecified here, not the problem itself. We need a theory that takes the physical structure of the scenario as input, and generates counterfactual consequences (of hypothetical decisions) as outputs.
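In other words, the theory we want has roughly the following shape (a signature-only sketch with hypothetical names; filling in the body is precisely the open problem):

    from typing import Any, Callable

    def counterfactual_consequences(
            world: Callable[[Any], float],  # the fully specified physical structure,
                                            # e.g. a program from a decision to a payout
            hypothetical_decision: Any,
    ) -> float:
        # What payout *would* result if the decision were this one?
        # Answering that, given only the physical structure, is the
        # decision theory's job; it is not supplied by the problem.
        raise NotImplementedError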
PS: To make P and P1 fully comparable, drop the “E*1e9” terms in P, so that both programs model the conventional transparent-boxes problem without an extraneous pi-preference payout.
This conversation is a bit confused. Looking back, P and P1 aren’t the same at all; P1 corresponds to the case where Omega never asks you for any decision at all! If S must be equal to S1 and S1 is part of the world program, then S must be part of the world program, too, not chosen by the player. If choosing an S such that S!=S1 is allowed, then it corresponds to the case where Omega simulates someone else (not specified).
The root of the confusion seems to be that Wei Dai wrote “def P(i): …”, when he should have written “def P(S): …”, since S is what the player gets to control. I’m not sure where making i a parameter to P came from, since the English description of the problem had i as part of the world-program, not a parameter to it.
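To make the difference in parameterization concrete, here is a hypothetical sketch (not Wei Dai’s actual code): in the first form i is the argument and S is pulled in from outside, while in the second the player’s strategy is the argument and i is fixed inside the world-program, as in the English description.

    I_FIXED = 1_000_000   # hypothetical stand-in for the index i fixed by the problem

    def toy_payout(choice):
        # stand-in payout rule: 1,000,000 for choice 1, otherwise 1,000
        return 1_000_000 if choice == 1 else 1_000

    def S(i):
        # some strategy the player supposedly controls, referenced globally
        return 1

    # "def P(i)" form: i is the parameter; S is not what P is a function of.
    def P_param_i(i):
        return toy_payout(S(i))

    # "def P(S)" form: the strategy is the parameter; i is part of the world-program.
    def P_param_S(strategy):
        return toy_payout(strategy(I_FIXED))

    assert P_param_i(I_FIXED) == P_param_S(S)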