Which strategy is best does not depend on what any given agent decides the ideal strategy is.
I’m assuming only that both the human player and Omega are capable of considering a total of six strategies for a simple payoff matrix and determining which are best. In particular, I’m calling Löbshit on the line of thought “If I can prove that it is best to cooperate, other actors will concur that it is best to cooperate” when it is used as part of the proof that cooperation is best.
I’m using TDT instead of CDT because I want a theory under which precommitment never becomes necessary or beneficial, and because CDT has trouble explaining why to one-box when the boxes are transparent.
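To make the strategy count concrete: a minimal sketch of the transparent-Newcomb enumeration, under assumptions of my own (the post doesn't specify the game). I assume the player's strategy is a function from the observed state of box B to an action, giving four player strategies, with Omega's two fill options making six in total; I fold Omega's choice into the standard prediction rule (fill box B iff the strategy one-boxes upon seeing it full) and use the usual $1,000 / $1,000,000 payoffs.

```python
from itertools import product

A, B_FULL = 1_000, 1_000_000  # assumed standard Newcomb payoffs

def payoff(strategy):
    """strategy maps the observed state of box B ('full'/'empty')
    to an action ('one-box'/'two-box')."""
    # Assumed Omega rule: fill box B iff the strategy one-boxes on seeing it full.
    filled = strategy['full'] == 'one-box'
    observed = 'full' if filled else 'empty'
    action = strategy[observed]
    b = B_FULL if filled else 0
    return b if action == 'one-box' else b + A

# All four conditional player strategies.
strategies = [dict(zip(('full', 'empty'), acts))
              for acts in product(('one-box', 'two-box'), repeat=2)]

best = max(strategies, key=payoff)
# Any strategy that one-boxes on seeing a full box B earns $1,000,000;
# two-boxing on 'full' leaves B empty and caps the payoff at $1,000.
```

Both agents can run this same enumeration and agree on which strategies are best, without either one's conclusion depending on the other's deliberation.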