Sure, and thanks for pointing that out. Any acausal trade depends on precommitment; it is the only way an agreement can reach across space-time, since the deal is struck in what I am calling the game-theoretic space of possible strategies. In the case I am discussing, a powerful agent would only have reason to even consider acausal trading with another agent if that agent can precommit; otherwise there is no way of ensuring acausal cooperation. If the other agent cannot understand, beforehand, that the peculiarities of the strategy space make it better to always precommit to the strategies with the highest payoff against all other strategies, then there is no trade to be done. It would be like trying to threaten a spider with a calmly spoken sentence. If the other agent cannot precommit, the powerful agent has no reason to punish it for anything: it could not cooperate anyway, it would not understand the game, and, more importantly for my argument, it could not follow through on its precommitment, which would eventually break down, especially since the evidence for it is so abstract and complex. The powerful agent might want to simulate the minor agent suffering anyway, but that would amount to nothing more than sadism.
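To make the incentive structure concrete, here is a minimal toy sketch of my own (the payoff numbers and function names are purely illustrative assumptions, not anything from the wiki entries): the powerful agent only gains from committing to a threat when the minor agent's choice is actually conditioned on that commitment, i.e. when it can precommit.

```python
# Hypothetical payoffs to the powerful agent (arbitrary illustrative numbers):
COOPERATION_VALUE = 10  # value of the minor agent cooperating
PUNISHMENT_COST = 2     # cost of simulating/punishing the minor agent

def minor_agent_cooperates(can_precommit: bool, threat_committed: bool) -> bool:
    """A precommitting agent conditions its choice on the (acausal) threat;
    an agent that cannot precommit never reliably cooperates."""
    return can_precommit and threat_committed

def powerful_agent_payoff(can_precommit: bool, threat_committed: bool) -> int:
    if minor_agent_cooperates(can_precommit, threat_committed):
        return COOPERATION_VALUE
    if threat_committed:
        return -PUNISHMENT_COST  # carrying out the threat is pure cost
    return 0

for can_precommit in (True, False):
    for threat_committed in (True, False):
        print(f"precommit={can_precommit}, threat={threat_committed}: "
              f"{powerful_agent_payoff(can_precommit, threat_committed)}")
```

Against a precommitting agent, committing to the threat yields +10 versus 0; against a non-precommitting agent it yields -2 versus 0, a strict loss. So in this toy model the threat is only worth making against agents that can precommit, which is the point above.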
You might want to take a look at the acausal trade wiki entry, and maybe the TDT entry; they can probably explain things better than I can:
http://wiki.lesswrong.com/wiki/Acausal_trade
http://wiki.lesswrong.com/wiki/TDT
This is not a justification; it is several repetitions of the disputed claim in various wordings.
Adding that to the post.