Sure, but Bot is only being adversarial against Alice, not Eliezer, since it makes the decision to precommit before learning whether its opponent is Alice or Eliezer. (To be clear, here I just use the shorthands Alice = less sophisticated opponent and Eliezer = more sophisticated opponent.) To put it differently, Bot would have made the same decision even if it were sure that Eliezer would punish precommitment, since according to its model at the time it makes the precommitment, its opponent is more likely to be an Alice than an Eliezer.
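To make the point concrete, here is a minimal expected-value sketch. All the numbers (the prior and the payoffs) are made up for illustration, not taken from the original scenario; the point is just that if Bot's prior puts enough weight on facing an Alice, precommitting maximizes expected utility even when Bot is certain an Eliezer would punish it.

```python
# Illustrative numbers only -- the prior and payoffs are assumptions.
p_alice = 0.8            # Bot's prior that its opponent is an Alice
gain_vs_alice = 5.0      # payoff from precommitting against Alice
loss_vs_eliezer = -10.0  # payoff if Eliezer punishes the precommitment
baseline = 0.0           # payoff from not precommitting against either

# Expected utility of precommitting, computed before Bot learns
# which opponent it faces:
ev_precommit = p_alice * gain_vs_alice + (1 - p_alice) * loss_vs_eliezer
# 0.8 * 5 + 0.2 * (-10) = 2.0 > 0, so Bot precommits anyway, even
# while certain that an Eliezer-type opponent would punish it.
print(ev_precommit > baseline)  # True
```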
So the only motivation for Eliezer to punish the precommitment would be if he is “offended at the unfairness on Alice’s behalf”, i.e. if his notion of fairness depends on Alice’s utility function as though it were Eliezer’s.