‘Utility Indifference’ (2010) by FHI researcher Stuart Armstrong

I just noticed that LessWrong has not yet linked to FHI researcher Stuart Armstrong’s brief technical report, Utility Indifference (2010). It opens:

Consider an AI that follows its own motivations. We’re not entirely sure what its motivations are, but we would prefer that the AI cooperate with humanity; or, failing that, that we can destroy it before it defects. We’ll have someone sitting in a room, their finger on a detonator, ready at the slightest hint of defection.

Unfortunately, as has been noted… this does not preclude the AI from misbehaving. It just means that the AI must act to take control of the explosives, the detonators, or the human who will press the button. For a superlatively intelligent AI, this would represent merely a slight extra difficulty. But now imagine that the AI was somehow indifferent to whether or not the explosives go off (but that nothing else was changed). Then if ever the AI does decide to defect, it will most likely do so without taking control of the explosives, as that would be easier than otherwise. By “easier” we mean that the chances of failure are less, since the plan is simpler… recall that under these assumptions, the AI counts getting blown up as equal in value to successfully defecting. How could we accomplish this indifference?
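For those curious how the report goes on to answer that question, the rough idea (sketched below in my own notation, so treat the exact form as a paraphrase rather than a quotation) is to modify the AI’s utility function U so that worlds in which the explosives detonate, call that event X, are valued at exactly the expected utility of the worlds in which they do not:

\[
U'(w) =
\begin{cases}
U(w) & \text{if the explosives do not go off } (w \in \neg X),\\
\mathbb{E}\left[\, U \mid \neg X \,\right] & \text{if they do } (w \in X).
\end{cases}
\]

Then \(\mathbb{E}[U' \mid X] = \mathbb{E}[U \mid \neg X]\), so influencing whether the detonation happens changes nothing from the AI’s point of view: getting blown up is worth exactly as much as carrying on, and seizing the explosives, the detonator, or the human buys it no expected utility.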