Interesting. I have a better grasp of what you’re saying now (or maybe not what you’re saying, but why someone might think that what you are saying is true). Rapid responses to information that needs digesting are unhelpful so I have nothing further to say for now (though I still think my original post goes some way to explaining the opinions of those on LW that haven’t thought in detail about decision theory: a focus on algorithm rather than decisions means that people think one-boxing is rational even if they don’t agree with your claims about focusing on logical rather than causal consequences [and for these people, the disagreement with CDT is only apparent]).
ETA: On the CDT bit, which I can comment on, I think you overstate how “increasingly contorted” the CDTers’ “redefinitions of winning” are. They focus on whether the decision has the best causal consequences. This is hardly contorted (it’s fairly straightforward) and doesn’t seem to be much of a redefinition: if you’re focusing on “winning decisions” as the CDTer does (rather than “winning agents”), it seems to me that the causal consequences are the most natural way of separating out the part of the agent’s winning that relates to the decision from the parts that relate to the agent more generally. As a definition of a winning decision, I think the definition used on LW is more revisionary than the CDTer’s definition (as a definition of a winning algorithm or agent, the definition on LW seems natural, but as a way of separating out the parts of the agent’s winning that relate to the decision, logical consequences seem far more revisionary). In other words, everyone agrees what winning means. What people disagree about is when we can attribute the winningness to the decision rather than to some other factor, and I think the CDTer takes the natural line here (which isn’t to say they’re right, but I think the accusations of “contorted” definitions are unreasonable).
If agents whose decision-type is always the decision with the best physical consequences, ignoring logical consequences, don’t end up rich, then it seems to me to require a good deal of contortion to redefine the “winning decision” as “the decision with the best physical consequences”, and in particular you must suppose that Omega is unfairly punishing rationalists even though Omega has no care for your algorithm apart from the decision it outputs, etc. I think that to believe that the Prisoner’s Dilemma against your clone, or Parfit’s Hitchhiker, or voting are ‘unfair’ situations requires explicit philosophical training, and most naive respondents would just think that the winning decision was the one corresponding to the giant heap of money on a problem where the scenario doesn’t care about your algorithm apart from its output.
To clarify: everyone should agree that the winning agent is the one with the giant heap of money on the table. The question is how we attribute parts of that winning to the decision rather than other aspects of the agent (because this is the game the CDTers are playing and you said you think they are playing the game wrong, not just playing the wrong game). CDTers use the following means to attribute winning to the decision: they attribute the winning that is caused by the decision. This may be wrong and there may be room to demonstrate that this is the case but it seems unreasonable to me to describe it as “contorted” (it’s actually quite a straightforward way to attribute the winning to the decision) and I think that using such descriptions skews the debate in an unreasonable way. This is basically just a repetition of my previous point so perhaps further reiteration is not of any use to either of us...
In terms of NP (Newcomb’s Problem) being “unfair”, we need to be clear about what the CDTer means by this (using the word “unfair” makes it sound like the CDTer is just closing their eyes and crying). On the basic level, though, the CDTer simply means that the agent’s winning in this case isn’t entirely determined by the winning that can be attributed to the decision, and hence that the agent’s winning is not a good guide to which decision wins. More specifically, the claim is that the agent’s winning is determined in part by things that are correlated with the agent’s decision but which aren’t attributable to the agent’s decision, and so the agent’s overall winning in this case is a bad guide to determining which decision wins. Obviously you would disagree with the claims they’re making, but this is different to claiming that CDTers think NP is unfair in some more everyday sense (where it seems absurd to think that Omega is being unfair because Omega cares only about what decision you are going to make).
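To make the attribution point concrete, here is a minimal sketch of my own (not something from the original discussion), assuming the standard Newcomb payoffs: $1,000 in the visible box, and $1,000,000 in the opaque box just in case one-boxing was predicted. It separates the two quantities the CDTer keeps apart: the agent’s overall winnings, and the causal difference the decision itself makes once the prediction is already fixed.

```python
# Hypothetical illustration only: standard Newcomb payoffs, accurate predictor.

def payoff(decision, prediction):
    """Total winnings for a given decision, with the prediction already fixed."""
    box_b = 1_000_000 if prediction == "one-box" else 0  # opaque box
    box_a = 1_000                                         # visible box
    return box_b + (box_a if decision == "two-box" else 0)

# Overall winnings of each agent-type (the thing everyone agrees to call "winning"),
# assuming the predictor correctly anticipates the agent's decision:
for d in ("one-box", "two-box"):
    print(f"{d} agent walks away with ${payoff(d, d):,}")

# The part the CDTer attributes to the decision itself: the causal difference
# the decision makes while holding the (already fixed) prediction constant.
for p in ("one-box", "two-box"):
    delta = payoff("two-box", p) - payoff("one-box", p)
    print(f"prediction={p}: two-boxing causes ${delta:,} more than one-boxing")
```

On this way of carving things up, the one-boxing agent ends up far richer overall, yet the causal contribution of the decision itself favours two-boxing by $1,000 whichever prediction is fixed, and that gap between the two quantities is exactly what the two sides are arguing over.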
I don’t necessarily think the CDTers are right but I don’t think the way you outline their views does justice to them.
So to summarise. On LW the story is often told as follows: CDTers don’t care about winning (at least not in any natural sense) and they avoid the problems raised by NP by saying the scenario is unfair. This makes the CDTer sound not just wrong but also so foolish it’s hard to understand why the CDTer exists.
But expanded to show what the CDTer actually means, this becomes: CDTers agree that winning is what matters to rationality, but because they’re interested in rational decisions they are interested in what winning can be attributed to decisions. Specifically, they say that winning can be attributed to a decision if it was caused by that decision. In response to NP, the CDTer notes that the agent’s overall winning is not a good guide to the winning decision because, in this case, the agent’s winning is also determined by factors other than their decision (that is, winning that cannot be attributed to the agent’s decision). Further, because the agent’s winnings correlate with their decisions even though they can’t be attributed to their decisions, the case can be particularly misleading when trying to determine the winning decision.
Now this second view may be both false and may be playing the wrong game but it at least gives the CDTer a fair hearing in a way that the first view doesn’t.