> But because Eliezer would never precommit to probably turn down a rock with an un-Shapley offer painted on its front (because non-agents bearing fixed offers created ex nihilo cannot be deterred or made less likely through any precommitment) there’s always some state for Bot to stumble into in its path of reflection and self-modification where Bot comes out on top.
This is exactly why Eliezer (and I) would turn down a rock with an unfair offer. Sure, there’s some tiny chance that it was indeed created ex nihilo, but it’s far more likely that it was produced by some process that deliberately tried to hide the process that produced the offer.
> Moreover, if an as-of-yet ignorant Bot has some premonition that learning more about Eliezer will make Bot encounter truths he’d rather not encounter, Bot can self-modify on the basis of that premonition, before risking reading up on Eliezer on LessWrong.
This depends on the basis of that premonition. At every point, Bot is considering the effect of its commitment on some space of possible agents, which gets narrowed down whenever Bot learns more about Eliezer. If Bot knows everything about Eliezer when it makes the commitment, then of course Eliezer should not give in. If Bot knows some evidence, then actual-Eliezer is essentially in a cooperation-dilemma with the possible agents that Bot thinks Eliezer could be. Then, Eliezer should not give in if he thinks that he can logically cooperate with enough of the possible agents to make Bot’s commitment unwise.
This isn’t true all of the time; I expect that a North Korean version of Eliezer would give in to threats, since he would be unable to logically cooperate with enough of the population to make the government see threat-making as pointless. Still, I expect that the situations Eliezer (and I, for that matter) will encounter in the future will not be like this.
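To make the "enough of the possible agents" condition concrete, here is a minimal sketch of the threshold logic, written by me for illustration; the payoffs and probabilities are made up, not taken from the original discussion. The point is just that Bot's commitment is only worthwhile when the probability that its target gives in exceeds some break-even value, and logical cooperation among enough of the possible Eliezers pushes that probability below it.

```python
# Illustrative toy model (my own assumption-laden sketch, not anyone's actual decision procedure):
# Bot's expected value of committing to a threat against a target it can't fully distinguish
# from other possible agents.

def threat_expected_value(p_give_in: float,
                          gain_if_target_gives_in: float = 10.0,
                          cost_of_carrying_out_threat: float = 3.0) -> float:
    """Expected value to Bot of committing to the threat.

    p_give_in: Bot's credence, over the space of possible agents it might be facing,
    that the actual target will give in. If the target resists, Bot is committed and
    must carry out the threat at a cost to itself.
    """
    return p_give_in * gain_if_target_gives_in - (1 - p_give_in) * cost_of_carrying_out_threat


# With these (assumed) payoffs, the break-even point is p = 3/13 ~ 0.23: if enough of the
# possible agents logically cooperate on refusing, the commitment becomes unwise for Bot.
for p in (0.9, 0.5, 0.2):
    ev = threat_expected_value(p)
    verdict = "threat worthwhile" if ev > 0 else "threat unwise"
    print(f"P(target gives in) = {p:.1f}: EV = {ev:+.1f} -> {verdict}")
```

The North Korea case above corresponds to a population where too few agents can coordinate on refusal, so p_give_in stays above the break-even point and threat-making remains profitable for the threatener.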