Again, you don’t sound like you’ve read this post. Suppose that, in fact, “it would be better for player 2 if player 2 did not have the option to defect if player 1 cooperated” (though I’m not at all sure of that when player 2 is Omega), and suppose Omega uses TDT. Then it will ask counterfactual questions about what “would” happen if Omega’s own abstract decision procedure gave various answers. Because of the nature of these counterfactuals, they screen off any actions by player 1 that depend on those answers, even ‘known’ actions.
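To make the screening-off concrete, here is a minimal sketch of that kind of evaluation (the payoffs and the toy model of player 1 are illustrative assumptions of mine, not anything from the post): player 1’s move is modeled as depending on what player 2’s abstract decision procedure outputs, so each counterfactual output recomputes player 1’s move instead of holding the ‘known’ move fixed.

```python
# Sketch of a TDT-style counterfactual evaluation over player 2's own
# decision procedure. Payoffs and the player-1 model are assumptions.

def player1_action(predicted_p2_output):
    """Player 1's move, modeled as depending on what player 2's abstract
    decision procedure would output, not on any fixed history."""
    return "C" if predicted_p2_output == "C" else "D"

# Illustrative payoffs to player 2, indexed by (p1_move, p2_move).
PAYOFF_P2 = {
    ("C", "C"): 3, ("C", "D"): 5,
    ("D", "C"): 0, ("D", "D"): 1,
}

def tdt_choice():
    best_output, best_value = None, float("-inf")
    for my_output in ("C", "D"):
        # Key step: the counterfactual recomputes player 1's action under
        # this output, screening off player 1's 'known' action.
        p1 = player1_action(my_output)
        value = PAYOFF_P2[(p1, my_output)]
        if value > best_value:
            best_output, best_value = my_output, value
    return best_output

print(tdt_choice())  # -> "C"
```

With these numbers, defecting would counterfactually flip player 1 to defection (payoff 1), so the procedure outputs cooperate (payoff 3); a causal counterfactual that held player 1’s move fixed at ‘C’ would instead favor defection (payoff 5).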
You’re postulating away the hard part, namely the question of whether the human player’s actions depend on Omega’s real thought processes or whether Omega can simply fool us!
Which strategy is best does not depend on what any given agent decides the ideal strategy is.
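A quick way to see this: given a payoff matrix, which strategy is best falls straight out of enumeration, with no reference to any agent’s opinion. The game below is an assumption of mine for concreteness (player 1 picks C or D; player 2 picks one of four response rules, giving the six strategies mentioned in the next paragraph):

```python
from itertools import product

# Rank strategies purely from the payoff matrix. The game and payoffs
# are assumptions chosen for concreteness: 2 strategies for player 1
# plus 4 response rules for player 2 = 6 strategies in total.

MOVES = ("C", "D")

# (payoff to player 1, payoff to player 2), indexed by (p1_move, p2_move).
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# Player 2's four response rules: (reply if p1 plays C, reply if p1 plays D).
p2_strategies = list(product(MOVES, repeat=2))

for p1_move in MOVES:
    for (reply_to_C, reply_to_D) in p2_strategies:
        p2_move = reply_to_C if p1_move == "C" else reply_to_D
        print(p1_move, (reply_to_C, reply_to_D), PAYOFFS[(p1_move, p2_move)])
```

Ranking the printed payoffs identifies the best strategies directly; nothing in the computation consults what either agent “decides” is ideal.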
I’m assuming only that both the human player and Omega are capable of considering a total of six strategies for a simple payoff matrix and determining which ones are best. In particular, I’m calling Löb’shit on the line of thought “If I can prove that it is best to cooperate, other actors will concur that it is best to cooperate” when used as part of the proof that cooperation is best.
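For reference, Löb’s theorem (in standard provability-logic notation, not anything specific to this post) is:

$$\Box\,(\Box P \rightarrow P) \rightarrow \Box P$$

That is, if a system proves “if P is provable then P,” it thereby proves P outright. The quoted line of thought uses “if I can prove cooperation is best, then cooperation is best (because others will concur)” as a lemma inside the very proof that cooperation is best, which is exactly the premise shape Löb’s theorem warns about: it licenses the conclusion regardless of its truth.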
I’m using TDT instead of CDT because I want precommitment never to become necessary or beneficial, and because CDT has trouble explaining why to one-box when the boxes are transparent.
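On the transparent-boxes point, here is a toy model (my own simplification: a perfect predictor and the standard $1,000,000 / $1,000 amounts) comparing the two policies at the level TDT evaluates them:

```python
# Toy transparent-Newcomb model (a simplification of mine, not from the post):
# Omega is assumed to perfectly predict the agent's *policy* and fills the
# big box before the agent chooses.

BIG, SMALL = 1_000_000, 1_000

def payoff(policy):
    """Payoff of committing to `policy` ('one-box' or 'two-box'),
    given that Omega predicts the policy perfectly."""
    big_full = (policy == "one-box")  # big box filled iff one-boxing is predicted
    big = BIG if big_full else 0
    return big if policy == "one-box" else big + SMALL

print(payoff("one-box"))   # 1,000,000: policy-level (TDT-style) evaluation favors this
print(payoff("two-box"))   # 1,000
```

CDT, evaluating acts with the box contents held fixed at choice time, takes both boxes whatever it sees; Omega therefore predicts the two-box policy and leaves the big box empty, which is the trouble mentioned above.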