I’m rereading past discussions to find insights. This jumped out at me:
Suppose Omega tells me that I make the same decision in the Prisoner’s Dilemma as Agent X. This does not necessarily imply that I should cooperate with Agent X.
Do you still believe this?
Playing chicken with Omega may result in you becoming counterfactual.
Why is cooperation more likely to qualify as “playing chicken” than defection here?
I was referring to the example Eliezer gives with your opponent being a DefectBot, in which case cooperating makes Omega’s claim false, which may just mean that you’d make your branch of the thought experiment counterfactual, instead of convincing DefectBot to cooperate:
X is just a piece of paper with “Defect” written on it.
So? That doesn’t hurt my utility in reality. I would cooperate because that wins if agent X is correlated with me, and doesn’t lose otherwise.
Winning is about how the alternatives you choose between compare. By cooperating against a same-action DefectBot, you are choosing nonexistence over (D,D), which is not obviously a neutral choice.
I don’t think this is how it works. Particular counterfactual instances of you can’t influence whether they are counterfactual or exist in some stronger sense. They can only choose whether there are more real instances with identical experiences (and their choices can sometimes acausally influence what happens with real instances, which doesn’t seem to be the case here since the real you will choose defect either way as predicted by Omega). Hypothetical instances don’t lose anything by being in the branch that chooses the opposite of what the real you chooses unless they value being identical to the real you, which IMO would be silly.
Particular counterfactual instances of you can’t influence whether they are counterfactual or exist in some stronger sense.
What can influence things like that? Whatever property of a situation can mark it as counterfactual (more precisely, as given by a contradictory specification, or as not following from a preceding construction, an assumed-real past state for example), that property could just as well be a decision made by an agent present in that situation. There is nothing special about agents or their decisions.
Why do you think something can influence it? Whether you choose to cooperate or defect, you can always ask both “what would happen if I cooperated?” and “what would happen if I defected?”. Insofar as being counterfactual makes sense, the alternative to being the answer to “what would happen if I cooperated?” is being the answer to “what would happen if I defected?”, even if you know that the real you defects.
Compare Omega telling you that your answer will be the same as the Nth digit of Pi. That doesn’t allow you to choose the Nth digit of Pi.
Winning is about how the alternatives you choose between compare. By cooperating against a same-action DefectBot, you are choosing nonexistence over (D,D), which is not obviously a neutral choice.
This becomes a (relatively) straightforward matter of working out where the (potentially counterfactual, depending on what you choose) calculation is being performed, to determine exactly what this ‘nonexistence’ means. Since this particular thought experiment doesn’t seem to specify any broader context, I assert that cooperating is clearly the correct option. Any agent which doesn’t cooperate is broken.
Basically, if you ever find yourself in this situation then you don’t matter. It’s your job to play chicken with the universe and not exist so the actual you can win.
Agent X is a piece of paper with “Defect” written on it. I defect against it. Omega’s claim is true and does not imply that I should cooperate.
I don’t see this argument making sense. Omega’s claim reduces to negligible the chance that a choice of Defection will be advantageous for me, because it makes it negligibly probable that either (D,C) or (C,D) will be realized. So I can only choose between the worlds of (C,C) and (D,D). Which means that the Cooperation world is advantageous, and that I should Cooperate.
In contrast, if Omega had claimed that we’d make the opposite decisions, then I’d only have to choose between the worlds of (D,C) and (C,D), with the worlds of (C,C) and (D,D) now having negligible probability. In which case, I should, of course, Defect.
The reasons for the correlation between me and Agent X are irrelevant when the fact of their correlation is known.
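A minimal sketch of the comparison above, assuming the standard Prisoner’s Dilemma ordering T > R > P > S with the usual illustrative payoffs 5, 3, 1, 0 (the thread never fixes actual numbers, so the matrix below is an assumption):

```python
# Hypothetical payoff matrix, standard PD ordering T > R > P > S.
# Keys are (my move, X's move); values are my payoff.
PAYOFF = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation
    ("D", "D"): 1,  # P: mutual defection
}

def best_move(consistent):
    """Maximize payoff over the outcomes Omega's claim leaves possible.

    `consistent(mine, theirs)` says whether an outcome keeps
    non-negligible probability given Omega's claim."""
    def value(mine):
        return max(PAYOFF[mine, theirs]
                   for theirs in ("C", "D")
                   if consistent(mine, theirs))
    return max(("C", "D"), key=value)

# "X makes the same decision as you": only (C,C) and (D,D) remain.
print(best_move(lambda mine, theirs: mine == theirs))  # -> C

# "X makes the opposite decision": only (C,D) and (D,C) remain.
print(best_move(lambda mine, theirs: mine != theirs))  # -> D
```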
Sorry, was this intended as part of the problem statement, like “Omega tells you that agent X is a DefectBot that will play the same as you”? If yes, then ok. But if we don’t know what agent X is, then I don’t understand why a DefectBot is a priori more probable than a CooperateBot. If they are equally probable, then it cancels out (edit: no it doesn’t, it actually makes cooperating a better choice, thanks ArisKatsaris). And there’s also the case where X is a copy of you, where cooperating does help. So it seems to be a better choice overall.
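A sketch of the edit above, under the same assumed payoffs: conditional on Omega’s claim being true, any branch where your move differs from the bot’s never actually happens, so only the matching branch pays out, and the two bot cases don’t cancel:

```python
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# CooperateBot always plays C, DefectBot always plays D, ignoring you.
# Omega's claim rules out any branch where your move differs from X's.
for mine in ("C", "D"):
    for bot, label in (("C", "CooperateBot"), ("D", "DefectBot")):
        if mine == bot:
            print(f"play {mine} vs {label}: real, payoff {PAYOFF[mine, bot]}")
        else:
            print(f"play {mine} vs {label}: counterfactual")

# play C vs CooperateBot: real, payoff 3
# play C vs DefectBot: counterfactual
# play D vs CooperateBot: counterfactual
# play D vs DefectBot: real, payoff 1
#
# With an equal prior over the two bots, cooperating pays R = 3 in its
# real branch while defecting pays P = 1, so the cases favour cooperation.
```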
There is also a case where X is an anticopy (performs the opposite action), which argues for defecting in the same manner.
Edit: This reply is wrong.
No it doesn’t. If X is an anticopy, the situation can’t be real and your action doesn’t matter.
Why can’t it be real?
Because Omega has told you that X’s action is the same as yours.
OK.
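For what it’s worth, the anticopy point can be checked mechanically under the same conventions: no move of yours is consistent with both “X plays the opposite” and Omega’s “X plays the same”, so no real instance of the situation exists:

```python
# Anticopy: X plays the opposite of whatever you play.
anticopy = {"C": "D", "D": "C"}

# Omega's claim: X's action equals yours. Is any move consistent?
print(any(anticopy[mine] == mine for mine in ("C", "D")))  # -> False
```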