Eliezer would never precommit to probably turn down a rock with an un-Shapley (unfair) offer painted on its front, because non-agents bearing fixed offers created ex nihilo cannot be deterred or made less likely through any precommitment.
But we do not live in a universe where such rocks commonly appear ex nihilo. So when you come across even a non-agent making an unfair offer, you have to consider the likelihood that it was created by an adversarial agent (as is indeed the case in your Bot example).
Sure, but Bot is only being adversarial against Alice, not Eliezer, since it makes the decision to precommit before learning whether its opponent is Alice or Eliezer. (To be clear, here I just use the shorthands Alice = less sophisticated opponent and Eliezer = more sophisticated opponent.) To put it differently, Bot would have made the same decision even if it were sure that Eliezer would punish the precommitment, since, according to its model at the time of precommitting, its opponent is more likely to be an Alice than an Eliezer.
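To make that reasoning concrete, here is a minimal sketch of Bot's calculation. The payoffs, the pie size, and the behavioral assumptions (Alice accepts any positive offer; Eliezer rejects unfair splits, leaving both players with nothing) are all hypothetical illustrations, not from the thread:

```python
# Hypothetical ultimatum-style game over a pie of size 10.
# Assumption: Alice accepts any positive offer; Eliezer punishes
# unfair precommitments by rejecting, so both sides get 0.
P_ALICE = 0.9  # Bot's prior that its opponent is an Alice

def bot_ev(bot_share, p_alice=P_ALICE):
    """Bot's expected payoff from precommitting to keep `bot_share` of 10."""
    fair = bot_share <= 5
    ev_vs_alice = bot_share                      # Alice accepts regardless
    ev_vs_eliezer = bot_share if fair else 0     # Eliezer rejects unfair splits
    return p_alice * ev_vs_alice + (1 - p_alice) * ev_vs_eliezer

# The unfair precommitment beats the fair one despite Eliezer's punishment:
print(bot_ev(9) > bot_ev(5))  # → True
```

Under this prior, the unfair precommitment is Bot's best move even with full knowledge that any Eliezer it meets will punish it, which is the sense in which the precommitment is not aimed at Eliezer.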
So the only motivation for Eliezer to punish the precommitment would be if he is “offended at the unfairness on Alice’s behalf”, i.e. if his notion of fairness depends on Alice’s utility function as though it were Eliezer’s.