So philosophical proponents of CDT will almost all (all, in my experience) agree that, if one is choosing a decision theory to follow, it is rational to choose a one-boxing decision theory; but they will say that, if one is choosing a decision, it is rational to two-box.
How can one simultaneously
consider it rational, when choosing a decision theory, to pick one that tells you to one-box; and
be a proponent of CDT, a decision theory that tells you to two-box?
It seems to me that this is possible only for those who (1) actually think one can’t or shouldn’t choose a decision theory (cf. some responses to Pascal’s wager) and/or (2) think it reasonable to be a proponent of a theory it would be irrational to choose. Those both seem a bit odd.
[EDITED to replace some “you”s with “one”s and similar locutions, to clarify that I’m not accusing PhilosophyStudent of being in that position.]
We need to distinguish two meanings of “being a proponent of CDT”. If by “be a proponent of CDT” we mean “think CDT describes the rational decision”, then the answer is simply that the CDTer thinks that rational decisions relate to the causal impact of decisions and rational algorithms relate to the causal impact of algorithms, and so there’s no reason to think that the rational decision must be endorsed by the rational algorithm (as we are considering different causal impacts in the two cases).
If by “be a proponent of CDT” we mean “think we should decide according to CDT in all scenarios including NP”, then we definitely have a problem, but no smart person should be a proponent of CDT in this way (all CDTers should have decided to become one-boxers if they have the capacity to do so because CDT itself entails that this is the best decision).
there’s no reason to think that the rational decision must be endorsed by the rational algorithm (as we are considering different causal impacts in the two cases).
I think this elides distinctions too quickly.
You can describe things this way. This description in hand, what does one do if dropped into NP (the scan has already been made, the boxes filled or not)? Go with the action dictated by the algorithm and collect the million, or the lone action and collect the thousand?
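A minimal toy calculation makes the stakes concrete (assuming the standard $1,000,000 / $1,000 payoffs and, purely for illustration, a Predictor who is right 99% of the time; the names below are just for this sketch):

```python
# Toy Newcomb numbers: the "algorithm" view vs the "lone action" view.

ACCURACY = 0.99                       # illustrative Predictor reliability
MILLION, THOUSAND = 1_000_000, 1_000

# Evaluated as an algorithm/disposition: what the Predictor put in the opaque
# box tracks which disposition you have.
ev_one_boxer = ACCURACY * MILLION
ev_two_boxer = (1 - ACCURACY) * MILLION + THOUSAND

# Evaluated as a lone action: the boxes are already filled, so for any credence
# p_full that the opaque box holds the million, taking both boxes just adds a
# guaranteed thousand.
def ev_of_act(p_full, take_both):
    return p_full * MILLION + (THOUSAND if take_both else 0)

print(ev_one_boxer, ev_two_boxer)                    # ~990000 vs ~11000
print(ev_of_act(0.5, True) - ev_of_act(0.5, False))  # 1000.0, whatever p_full is
```

The first comparison is the one the one-boxer points to; the last line is the dominance consideration the two-boxer points to.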
(all CDTers should have decided to become one-boxers if they have the capacity to do so because CDT itself entails that this is the best decision)
Are you thinking of something like hiring a hitman to shoot you unless you one-box, so that the payoffs don’t match NP? Or of changing your beliefs about what you should do in NP?
For the former, convenient ways of avoiding the problem aren’t necessarily available, and one can ask why the paraphernalia are needed when no one is stopping you from just one-boxing. For the latter, I’d need a bit more clarification.
This comment was only meant to suggest how it was internally consistent for a CDTer to:
consider it rational, when choosing a decision theory, to pick one that tells you to one-box; and
be a proponent of CDT, a decision theory that tells you to two-box?
In other words, I was not trying here to offer a defence of a view (or even an outline of my view) but merely to show why it is that the CDTer can hold both of these things without inconsistency.
Are you thinking of something like hiring a hitman to shoot you unless you one-box, so that the payoffs don’t match NP? Or of changing your beliefs about what you should do in NP?
I’m thinking about changing your dispositions to decide. How one might do that will depend on one’s capabilities (for myself, I have some capacity to resolutely commit to later actions without changing my beliefs about the rationality of that decision). For some agents, this may well not be possible.
This comment was only meant to suggest how it was internally consistent for a CDTer to: consider it rational, when choosing a decision theory, to pick one that tells you to one-box; and be a proponent of CDT, a decision theory that tells you to two-box?
You didn’t, quite. CDT favors modifying to one-box on all problems where there is causal influence from your physical decision to make the change. So it favors one-boxing on Newcomb with a Predictor who predicts by scanning you after the change, but two-boxing with respect to earlier causal entanglements, or logical/algorithmic similarities. In the terminology of this post, CDT (counterfactuals over acts) attempts to replace itself with counterfactuals over earlier innards at the time of replacement, not counterfactuals over algorithms.
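To put the timing point in toy form (my own illustrative sketch, assuming for simplicity a perfectly reliable scan and the standard payoffs):

```python
# Toy model of the timing point: CDT scores a self-modification by its causal
# consequences, so committing now to one-box pays only if the commitment can
# still influence what the Predictor puts in the opaque box.

MILLION, THOUSAND = 1_000_000, 1_000

def causal_gain_of_committing_to_one_box(scan_still_to_come: bool) -> int:
    """Payoff of modifying into a one-boxer, relative to staying a two-boxer
    (assumes a perfectly reliable scan, for simplicity)."""
    if scan_still_to_come:
        # The modification causally affects the prediction: the opaque box gets
        # filled, at the cost of the transparent box's thousand.
        return MILLION - THOUSAND
    # The scan (or any earlier causal/logical entanglement) is already fixed:
    # modifying changes nothing about the boxes and just forgoes the thousand.
    return -THOUSAND

print(causal_gain_of_committing_to_one_box(True))   #  999000: CDT says modify
print(causal_gain_of_committing_to_one_box(False))  #   -1000: CDT says don't
```

Which case you are in is just the question of whether the entanglement the Predictor exploits lies causally downstream of the modification.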
OK, that’s all good, but it’s already part of the standard picture, and it leaves almost all the arguments intact for cases one didn’t get to precommit for, which is the standard presentation in any case. So I’d say it doesn’t much support the earlier claim:
For those that haven’t, I suspect that the “disagreement” with philosophers is mostly apparent and not actual
Yes. So it is consistent for a CDTer to believe that:
(1) When picking a decision theory, you should pick one that tells you to one-box in instances of NP where the prediction has not yet occurred; and
(2) CDT correctly describes two-boxing as the rational decision in NP.
I committed the sin of brevity in order to save time (LW is kind of a guilty pleasure rather than something I actually have the time to be doing).
No pressure.
Perhaps my earlier claim was too strong.
Nevertheless, I do think that people on LW who haven’t thought about the issues a lot might well not have a solid enough opinion to be either agreeing or disagreeing with the LW one-boxing view or the two-boxing philosophers’ view. I suspect some of these people just note that one-boxing is the best algorithm and think that this means they’re agreeing with LW, when in fact this leaves them neutral on the issue until they make their claim more precise.
I also think one of the reasons for the lack of two-boxers on LW is that LW often presents two-boxing arguments in slogan form, which fails to do justice to those arguments (see my comments here and here). That isn’t to say that the two-boxers are right, but it is to say that I think the debate gets skewed unreasonably in one-boxers’ favour on LW (not always, but often enough to influence people’s opinions).