Then you have to somehow define a specific person (you don't want it to exclude future iterations of them) and what counts as blowing it up (you don't want it to conclude, on the basis of the butterfly effect, that they blew it up in the last case). Both of these ideas are easy for a person to understand, but they don't seem to mean much at a deeper level.
No, I don't need that. I make the AI indifferent to a particular quantum event: the detonating event in a particular setup with explosives. The AI will act as if it believed that this quantum event would never result in the explosives going off.
The AI is hence not indifferent to that specific person and the explosives directly, but only as a consequence of acting as if it believed that the detonator would never work.
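A minimal sketch of that kind of engineered indifference, assuming a toy expected-utility agent: the agent's world model is edited so that the branch in which the designated quantum event fires is dropped and the remaining probability mass renormalized, so that branch can never influence its choice of action. All names, actions, and numbers here are hypothetical illustrations, not part of the actual proposal.

```python
# Toy sketch: an agent made to act as if a designated quantum event
# (the detonator firing) will never result in the explosives going off.
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    detonator_fired: bool   # the designated quantum event
    utility: float          # the agent's utility for this outcome

def true_model(action: str) -> list[tuple[float, Outcome]]:
    """The real distribution over outcomes for each action (made-up numbers)."""
    if action == "stay":
        return [(0.9, Outcome(False, 10.0)), (0.1, Outcome(True, -100.0))]
    else:  # "move_mainframe"
        return [(1.0, Outcome(False, 8.0))]

def indifferent_model(action: str) -> list[tuple[float, Outcome]]:
    """Same model, but the detonation branch is dropped and the remaining
    probability renormalized: the agent believes the event never fires."""
    branches = [(p, o) for p, o in true_model(action) if not o.detonator_fired]
    total = sum(p for p, _ in branches)
    return [(p / total, o) for p, o in branches]

def expected_utility(model, action: str) -> float:
    return sum(p * o.utility for p, o in model(action))

for action in ("stay", "move_mainframe"):
    print(action,
          "true EU:", expected_utility(true_model, action),
          "indifferent EU:", expected_utility(indifferent_model, action))
```

Under the true model the agent would move away from the explosives (expected utility 8 versus -1); under the modified model it stays put (10 versus 8), because the detonation branch never enters its calculations at all. Its indifference to the explosives is a by-product of the edited belief, not a direct term in its utility function.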
In that case, it may still decide to move its mainframe or the explosives. It seems highly likely that it will upload itself onto the internet to take advantage of all the available computing power.