“Consequences” only in a counterfactual world. I don’t see how you can call this consequentialist without stretching the term to the point that it could include nearly any system of morality.
Both CDT and TDT compare counterfactuals; they just take their counterfactuals from different points in the causal graph.
In both cases, while computing them, you never assume anything which you know to be false, whereas Kant is not like that. (Just realised, I’m not sure this is right.)
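To make “different points in the causal graph” concrete, here is a minimal Python sketch of Newcomb’s problem; the function names and payoff numbers are illustrative, not anyone’s canonical formalisation of either theory.

```python
# Toy model of Newcomb's problem as a small causal graph: the agent's
# decision algorithm sits upstream of both the predictor's prediction
# and the agent's action. All names here are illustrative.

def payoff(prediction_one_boxes, action_one_boxes):
    big_box = 1_000_000 if prediction_one_boxes else 0
    small_box = 0 if action_one_boxes else 1_000
    return big_box + small_box

def cdt_value(action_one_boxes, fixed_prediction):
    # CDT's counterfactual surgery happens at the *action* node:
    # the prediction is causally upstream, so it is held fixed.
    return payoff(fixed_prediction, action_one_boxes)

def tdt_value(algorithm_one_boxes):
    # TDT's surgery happens at the *algorithm* node: the prediction
    # depends on the algorithm, so it co-varies with the intervention.
    return payoff(algorithm_one_boxes, algorithm_one_boxes)

# Whatever the fixed prediction, CDT's counterfactuals favour two-boxing...
assert cdt_value(False, True) > cdt_value(True, True)
assert cdt_value(False, False) > cdt_value(True, False)
# ...while TDT's counterfactuals favour one-boxing.
assert tdt_value(True) > tdt_value(False)
```

The two theories run the same comparison of counterfactual payoffs; the only difference is which node gets cut.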
Counterfactual mugging and the ransom problem I mentioned in the great-grandparent are both cases where TDT requires you to consider consequences of counterfactuals you know didn’t happen. Omega’s coin didn’t come up heads, and your friend has been kidnapped. Nevertheless you need to consider the consequences of your policy in those counterfactual situations.
I think counterfactual mugging was originally brought up in the context of problems which TDT doesn’t solve; that is, it gives the obvious but non-optimal answer. The reason is that, regardless of my counterfactual decision, Omega’s coin still lands the same way and Omega still doesn’t pay.
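The gap between the two answers is easy to see numerically. A rough sketch, using the usual illustrative stakes ($10,000 reward, $100 demand, fair coin):

```python
# Counterfactual mugging: Omega flips a fair coin. On heads, it pays
# you $10,000 iff it predicts you would have paid $100 on tails; on
# tails, it asks you for the $100. Numbers are the standard toy ones.

def ev_before_flip(pay_when_asked):
    # Evaluate the policy from before the coin flip (UDT-style).
    heads_branch = 10_000 if pay_when_asked else 0
    tails_branch = -100 if pay_when_asked else 0
    return 0.5 * heads_branch + 0.5 * tails_branch

def ev_after_tails(pay_when_asked):
    # Evaluate after updating on tails: the heads branch is known not
    # to have happened, so paying just loses $100. This is the
    # "obvious but non-optimal" answer.
    return -100 if pay_when_asked else 0

assert ev_before_flip(True) > ev_before_flip(False)   # precommit to pay
assert ev_after_tails(True) < ev_after_tails(False)   # regret it once tails is known
```

A theory that updates on the coin before evaluating counterfactuals refuses to pay; one that evaluates the whole policy from behind the flip pays and comes out ahead on average.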
There are two rather different things both going under the name counterfactuals.
One is when I think of what the world would be like if I did something that I’m not going to do.
Another is when I think of what the world would be like if something not under my control had happened differently, and how my actions affect that.
They’re almost orthogonal, so I question the utility of using the same word.
Well, I’ve been consistently using the word “counterfactual” in your second sense.
Well that might explain some of our miscommunication. I’ll go back and check.
This makes sense using the first definition; at least, according to TDT it does.
This is clearly using the first definition.
This only makes sense with the second, and should probably be UDT rather than TDT—the original TDT didn’t get the right answer on the counterfactual mugging.
Sorry, I meant something closer to UDT.
Alright cool. So I think that’s what’s going on—we all agree but were using different definitions of counterfactuals.
You need a proof-system to ensure that you never assume anything which you know to be false.
ADT and some related theories have achieved this. I don’t think TDT has.
What I meant by that statement was the idea that CDT works by basing counterfactuals on your action, which seems a reasonable basis for counterfactuals since prior to making your decision you obviously don’t know what your action will be. TDT similarly works by basing counterfactuals on your decision, which you also don’t know prior to making it.
Kant, on the other hand, bases his counterfactuals on what would happen if everyone did that, and it is possible that his counterfactuals will involve assuming things I know to be false in a sense that CDT’s and TDT’s don’t (e.g. when deciding whether to lie, I evaluate possible worlds in which everyone lies and in which everyone tells the truth, both of which I know not to be the case).
Well here is the issue.
Let’s say I have to decide what to do at 2 o’clock tomorrow. If I light a stick of dynamite, I will be exploded. If I don’t, then I won’t. I can predict that I will, in fact, not light a stick of dynamite tomorrow. I will then know that one of my counterfactuals is true and one is false.
This can mess up the logic of decision-making. There are ways of handling this (see http://lesswrong.com/lw/2l2/what_a_reduction_of_could_could_look_like/). This ensures that you can never figure out a decision before making it, which makes things simpler.
I’m not sure if this contradicts what you’ve said.
And I would agree exactly with your analysis about what’s wrong with Kant, and how that’s different from CDT and TDT.
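The trick in the linked post can be caricatured in a few lines. This is only a toy of the diagonalisation step, not the actual formalism: `provable` here is a stub standing in for real proof search over the agent’s source code.

```python
# Toy "chicken rule": if the agent ever proves which action it will
# take, it takes a different one. A sound proof system therefore can
# never settle the agent's action before the agent acts, which is what
# keeps all the counterfactuals live during deliberation.

def agent(provable):
    # `provable` is a stand-in for proof search: the set of statements
    # the agent can prove about its own behaviour.
    for action in ("light_dynamite", "dont_light"):
        if f"agent returns {action}" in provable:
            # Diagonalise against any proof about our own action.
            return "dont_light" if action == "light_dynamite" else "light_dynamite"
    # No proof of our own action was found: decide on the merits.
    return "dont_light"

# With no proofs about itself available, the agent sensibly declines.
assert agent(set()) == "dont_light"
# Anything claiming to prove the agent's action gets contradicted, so
# such a "proof" could only have come from an unsound system.
assert agent({"agent returns dont_light"}) == "light_dynamite"
```

So the agent never ends up in the awkward position of knowing its decision, and hence knowing which counterfactual is false, before it has decided.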
I’m not sure I agree with myself. I think my analysis makes sense for the way TDT handles Newcomb’s problem or Prisoner’s dilemma, but it breaks down for Transparent Newcomb or Parfit’s Hitch-hiker. In those cases, owing to the assistance of a predictor, it seems like it is actually possible to know your decision in advance of making it.
Well you always know that one of your counterfactuals is true.
and Transparent Newcomb is a bit weird because one of the four possible strategies just explodes it.
There is no need to make that assumption. The whole collection of possible decisions could be located on an impossible counterfactual. Incidentally, this is one way of making sense of Transparent Newcomb.
Would you ever actually be in a situation where you chose an action tied to an impossible counterfactual? Wouldn’t that represent a failure of Omega’s prediction?
And since you always choose an action...
It matters what you do when you are in an actually impossible counterfactual, because when, earlier, you decide what decision theory you’ll be using in that counterfactual, you might not yet know that it is impossible. So you need to precommit to act sensibly even in the situation that doesn’t actually exist (not that you would know that if you got into that situation). Seriously. And sometimes you take an action that determines the fact that you don’t exist, which you can easily obtain in a variation on Transparent Newcomb.
When you make the precommitment-to-business-as-usual conversion, you get a principle that decision theory shouldn’t care about whether the agent “actually exists”, and focus on what it knows instead.
Yes. The actually impossible counterfactuals matter. All I’m saying is that the possible counterfactuals exist.
If you took such an action, wouldn’t you not exist? I request elaboration.
(You’ve probably misunderstood, I edited for clarity; will probably reply later, if that is not an actually impossible event.)
New reply: Yes, I agree.
All I’m saying is that when you actually make choices in reality, the counterfactual you end up using will happen. When a real Kant-Decision-Theory user makes choices, his favorite counterfactual will fail to actually occur.
You could possibly fix that by saying Omega isn’t perfect, but his predictions are correlated enough with your decision to make precommitment possible.
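With the standard Newcomb payoffs, the correlation needed is modest. A quick back-of-the-envelope from the precommitment perspective, where the prediction tracks your choice with probability p:

```python
# Newcomb's problem with an imperfect predictor that is right with
# probability p. Standard illustrative payoffs: $1,000,000 in the
# opaque box, $1,000 in the transparent one.

def ev_one_box(p):
    # The opaque box is full iff the predictor was right about you.
    return p * 1_000_000

def ev_two_box(p):
    # You always get the $1,000; the opaque box is full only if the
    # predictor was wrong about you.
    return 1_000 + (1 - p) * 1_000_000

# ev_one_box(p) > ev_two_box(p)  <=>  p > 0.5005, so barely-better-
# than-chance prediction already makes precommitting to one-box pay.
assert ev_one_box(0.51) > ev_two_box(0.51)
assert ev_one_box(0.5) < ev_two_box(0.5)
```

So Omega doesn’t need to be perfect, only correlated with your decision slightly better than a coin flip (given these payoffs), for the precommitment to be worthwhile.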