Not blackmailing in response to that anticipation is a property of blackmailer behavior that seems to have been relied on in the decision to ignore all blackmail.
The expectation of a response to blackmail is why blackmailers would exist in the first place.
Suppose there were lots of “stupid” blackmailers around who blackmailed everyone all day, even though no victim ever conformed.
Why would these exist, any more than stupid anti-blackmailers (who, e.g., go around attacking anyone who would give in to blackmail if a blackmailer showed up), if not for a belief that somebody would give in to blackmail?
I think what Nesov is talking about is best described as a mind that will attack conditioned on the victim’s behavior alone, without considering possible behavior changes of the victim in any way. This is different from an Nth-order blackmailer. In fact, I think blackmail is the wrong word here (Nesov says he does not know what blackmail means in this context, so this is not that surprising). For example, instead of seeking behavior modification through threats, such a mind seeks justice through retribution. I think the most likely SI to implement this is one extrapolating an evolved mind’s preferences. The will to seek justice through retribution leads to behavior changes in many cases, which confers an evolutionary advantage; but once it has evolved, it is a preference in its own right. If someone committed a horrific crime (completely ignoring all law-enforcement threats), and it was then somehow ensured that he could never hurt anyone again, most people would still want justice. Other evolved minds might have made the same simplification: “if someone does that, I will hit them” is a relatively easy strategy to encode and a relatively effective one.
It is true that there might exist minds that see the act of “giving in to retribution seekers” as itself deserving retribution, and this could in principle cancel out all other retribution seekers. But it would seem like privileging the hypothesis to think that all such things cancel out completely. You might have absolutely no way of estimating which actions would make other minds seek retribution against you (I think the most complicating factor is that many consider “non-punishment of evildoers” worthy of retribution, while others consider “punishment of people who are not actually evildoers” worthy of retribution), but that is a fact about your map, not a fact about the territory (and unlike the blackmail case, this is not an instance of ignorance to be celebrated). And the original topic was what an SI would do.
An SI would presumably be able to estimate this. In the case of an SI that is otherwise indifferent to humans, this cashes out as increased utility for “punish humans, to avoid retribution from those that consider the non-punishment of humans worthy of retribution” and increased utility for “treat humans nicely, to avoid retribution from those that would seek retribution for not treating them nicely” (those that demand extermination are not really that important if extermination is the default behavior anyway). If the resources it would take to punish or help humans are small, this reduces the probability of extermination and increases the probability of punishment and of help. The punishment would take a form that avoids retribution from those that categorically seek retribution for that type of punishment regardless of what the “crime” was. If there are lots of (evolvable, and likely to be extrapolated) minds that agree that a certain type of punishment directed at our type of mind constitutes “torture”, and that torturers deserve to be punished (completely independently of how this affects their actions), then it will have to find some other form of punishment. So, basically: increased probability of very clever solutions that satisfy those demanding punishment, while not pissing off those that categorically dislike certain types of punishment (some sort of convoluted and confusing existence that some (evolvable and retribution-inclined) minds consider “good enough punishment”, and that others consider “treated acceptably”). At the very least, increased probability of “staying alive a bit longer in some way that costs very little resources”.
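As a purely illustrative sketch of the bookkeeping this argument assumes, here is a toy expected-cost model in Python. The factions, probabilities, costs, and policy names are all made up for illustration; the only thing taken from the paragraph above is the shape of the reasoning: an SI that is indifferent to humans weighs the (small) resource cost of each policy toward them against the expected retribution from other minds with strong opinions about how such “criminals” should be treated.

```python
# Toy model (all factions, probabilities, and costs are invented for illustration):
# an SI that is indifferent to humans picks the policy toward them that minimizes
# direct resource cost plus expected retribution from other minds.

# Each faction: (probability the SI assigns to such minds mattering,
#                cost of their retribution,
#                set of policies that would trigger it)
FACTIONS = {
    "demand_punishment":       (0.10, 50.0, {"exterminate", "treat_nicely"}),
    "forbid_torture":          (0.10, 80.0, {"punish_harshly"}),
    "demand_decent_treatment": (0.05, 60.0, {"exterminate", "punish_harshly"}),
}

# Direct resource cost of each policy (tiny compared to retribution costs,
# which is the assumption the argument leans on).
RESOURCE_COST = {
    "exterminate": 0.0,
    "treat_nicely": 1.0,
    "punish_harshly": 1.0,
    # "good enough punishment" that none of the factions objects to
    "convoluted_compromise": 2.0,
}

def expected_cost(policy: str) -> float:
    """Resource cost plus expected retribution triggered by this policy."""
    retribution = sum(
        p * cost
        for p, cost, triggers in FACTIONS.values()
        if policy in triggers
    )
    return RESOURCE_COST[policy] + retribution

if __name__ == "__main__":
    for policy in sorted(RESOURCE_COST, key=expected_cost):
        print(f"{policy:22s} expected cost = {expected_cost(policy):5.1f}")
    # With these made-up numbers the cheap "convoluted_compromise" wins:
    # it triggers none of the factions, so its only cost is its small resource cost.
```

With any numbers of roughly this shape (cheap policies, nonzero probability on mutually opposed retribution-seeking factions), the minimum tends to land on some compromise rather than on extermination or on the most objectionable form of punishment, which is the point being made above; the specific numbers carry no information.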
This would, for example, have policy implications for people who assume the many-worlds interpretation and do not care about measure. They can no longer launch a bunch of “semi-randomized AIs” (not random in the sense of “random neural network connections”, but more along the lines of “letting many teams create many designs, and then randomly selecting which one to run”) and hope that one will turn out OK and that the others will just kill everyone: since they can no longer be sure that an uncaring AI will kill them, they can no longer be sure that they will wake up in the universe of a caring AI.
(This seems related to what Will sometimes talks about, though in very different terminology.)
Agreed that this is a different case, since it doesn’t originate in any expectation of behavior modification.