This becomes relevant when considering, for example, whether to murder AGI developers who have a relatively small chance of managing friendliness but a high chance of managing recursive self-improvement.
Relevant, perhaps, but if you absolutely can't talk them out of it, the negative expected utility of allowing them to continue could far outweigh that of being imprisoned for murder.
Of course, it would take a very atypical person to actually carry through on that choice, but if humans weren't so poorly built for utility calculations, we might not even need AGI in the first place.