but a paperclip maximizer wouldn’t get upset or angry if a supernova destroyed some of its factories, for example.
I probably wouldn’t either. It sounds like the sort of amortized risk I would have accounted for when spreading the factories across thousands of star systems. The anger would come in only when the destruction was caused by another optimising entity, and more specifically by an entity I have modelled as ‘agenty’ rather than one I have intuitively objectified.