Okay, but… what decision actually maximizes paperclips? The world where the 50 paperclips have been teleported to safety may be indistinguishable, from the agent’s perspective, from the world where the laws of physics went on working as they usually do, but… I guess I’m having trouble imagining holding an epistemology where those are considered equivalent worlds rather than just equivalent states of knowledge. That seems like it’s starting to get into ontological relativism.
Suppose you’ve just pressed the button. You’re you, not a paperclip maximizer; you don’t care about paperclips, you just wanted to see what happens. And you can see what happens, because you have another device: it has one button and an LED. If you press the button, the LED will light up if and only if the paperclips were teleported to safety by a previously unknown law of physics. You press the button. The light turns on. How surprised are you?