It seems as if you currently believe that the correct solution to an isolated Transparent Newcomb’s Problem is one-boxing, but that the correct solution in a context where other problems are also possible is two-boxing. Is that right?
Yes.
I don’t think the question of the most advantageous solution to an isolated Transparent Newcomb’s Problem is likely to be a very useful one, though.
I don’t think it’s possible to have a general-case decision theory which gets the best possible results in every situation (see the Andy and Sandy example, where getting good results in one prisoner’s dilemma necessitates getting bad results in the other, so any decision theory wins in at most one of the two).
That being the case, I don’t think that winning Transparent Newcomb’s Problem is a very meaningful goal for a decision theory. The way I see it, it’s like focusing on coming out ahead in prisoner’s dilemmas with Sandy while disregarding the relative likelihood of ending up in a dilemma with Andy rather than with Sandy, and assuming that if you did end up in a prisoner’s dilemma with Andy, you could use the same decision process to come out ahead in that too.
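To put rough numbers on that last point, here’s a minimal Python sketch. The payoffs are invented and the names (`PAYOFFS`, `dispo_A`, `dispo_B`, `expected_payoff`) are hypothetical rather than anything from the actual Andy/Sandy setup; it only illustrates the structural claim that when two dilemmas reward opposite dispositions, which disposition comes out ahead depends on how likely you are to face each one.

```python
# A rough sketch with made-up numbers, not the actual Andy/Sandy payoffs.
# The two dilemmas are set up so that the disposition tuned to win one of
# them does badly in the other; no single disposition dominates both.

# Hypothetical payoff each fixed disposition earns in each dilemma.
PAYOFFS = {
    "dispo_A": {"andy": 10, "sandy": 1},   # tuned to do well against Andy
    "dispo_B": {"andy": 1,  "sandy": 10},  # tuned to do well against Sandy
}

def expected_payoff(disposition: str, p_andy: float) -> float:
    """Expected payoff of committing to a disposition, given the probability
    of facing the Andy-style dilemma rather than the Sandy-style one."""
    p_sandy = 1.0 - p_andy
    return (p_andy * PAYOFFS[disposition]["andy"]
            + p_sandy * PAYOFFS[disposition]["sandy"])

if __name__ == "__main__":
    # Which disposition comes out ahead flips with the relative likelihood
    # of the two dilemmas, which is exactly what the isolated framing ignores.
    for p_andy in (0.2, 0.8):
        scores = {d: expected_payoff(d, p_andy) for d in PAYOFFS}
        best = max(scores, key=scores.get)
        summary = ", ".join(f"{d}={v:.1f}" for d, v in scores.items())
        print(f"P(Andy)={p_andy}: {summary} -> best: {best}")
```

At P(Andy)=0.2 the Sandy-tuned disposition wins on expectation, and at P(Andy)=0.8 the Andy-tuned one does, so calling either one "the winning strategy" without specifying the relative likelihoods leaves the goal underspecified.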