What, we can’t just assume for now that torture is bad without getting into metaethics?
We can assume that, but we can’t draw the conclusion about Occam’s Razor that my argument draws. There’s a mistake in it somewhere. A statement like “torture is bad” can never imply a statement like “this physical or mathematical theory is true”; the world doesn’t work like that.
Of course it can’t imply it, but it can test whether you actually believe it. The bit that says “If you’re still reluctant to push the button, it looks like you already are a believer in the ‘strong Occam’s Razor’ saying simpler theories without local exceptions are ‘more true’” sounds fine to me. Then the only question is, in the long run and outside the context of weird hypotheticals, whether this kind of thinking wins more than it loses.
But we can reframe it not to talk about morality, and keep things on the “is” side of the divide.
Suppose you are a paperclip maximizer, and you have a sealed box with 50 paperclips in it. You also have a machine with a button which, if pressed, will create 5 new paperclips and give them to you, while vaporizing the contents of the box without visibly affecting the box or anything outside it. Now consider the following physical theory: right after you sealed the box, the laws of physics made a temporary exception and immediately teleported the paperclips to the core of a distant planet, where they will remain safe and intact indefinitely. Given that this theory makes exactly the same observational predictions as our current understanding of physics, would pressing the button be the paperclip-maximizing thing to do?
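To pin down what’s at stake, here is a toy expected-paperclip calculation (a sketch only; the payoffs follow the setup above, and the prior p on the exception theory is a free parameter I’m introducing, not anything decided by the scenario):

```python
# Expected number of paperclips from pressing vs. not pressing,
# given a prior p that the "temporary exception" theory is true.
#
# If the theory is true, the 50 paperclips are already safe on the
# distant planet, so pressing just adds 5 more: 50 + 5 = 55.
# If ordinary physics holds, pressing vaporizes the 50 and adds 5.
# Not pressing leaves all 50 intact under either theory.

def expected_paperclips(press: bool, p: float) -> float:
    if press:
        return p * 55 + (1 - p) * 5
    return 50.0

for p in (0.0, 0.5, 0.9, 0.99):
    print(f"p={p:.2f}: press={expected_paperclips(True, p):5.2f}, "
          f"hold={expected_paperclips(False, p):5.2f}")

# Pressing wins only when 55p + 5(1-p) > 50, i.e. p > 0.9, so the
# "right" action is purely a function of the prior over theories.
```

Nothing in the sketch says what p should be; that is exactly the question the razor is supposed to answer.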
If I were a paperclip maximizer, I would not press the button. If that means accepting the “strong Occam’s Razor”, so be it.
This is begging the question. The answer depends on the implementation of the maximizer. Of course, if you have a “strong Occamian” prior, you imagine a paperclip maximizer based on that!
Okay, but… what decision actually maximizes paperclips? The world where the 50 paperclips have been teleported to safety may be indistinguishable, from the agent’s perspective, from the world where the laws of physics went on working as they usually do, but… I guess I’m having trouble imagining holding an epistemology where those are considered equivalent worlds rather than just equivalent states of knowledge. That seems like it’s starting to get into ontological relativism.
Suppose you’ve just pressed the button. You’re you, not a paperclip maximizer; you don’t care about paperclips, you just wanted to see what would happen. And you have another device: it has one button and an LED. If you press the button, the LED will light up if and only if the paperclips were teleported to safety by a previously unknown law of physics. You press the button. The light turns on. How surprised are you?
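If you want to put a number on that surprise, the Bayesian reading is just the surprisal of the event under whatever prior you held (a minimal sketch; the two priors below are invented stand-ins for an indifferent observer and a strongly Occamian one):

```python
import math

# Surprisal, in bits, of seeing the LED turn on, given the prior
# probability you assigned to the teleportation theory beforehand.
def surprisal_bits(p_theory: float) -> float:
    return -math.log2(p_theory)

# 0.5 models indifference between the two theories; 1e-9 is an
# arbitrary stand-in for a strongly Occamian prior.
for p in (0.5, 1e-9):
    print(f"prior={p}: surprise = {surprisal_bits(p):.1f} bits")
```

A strongly Occamian prior makes the lit LED astronomically surprising; an indifferent one makes it a coin flip either way.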
And a paperclipper with an anti-Occamian prior that does push the button is revealing a different answer to the supposedly meaningless question.
Either way, it is assigning utility to stuff it cannot observe, and this shows that questions about the implied invisible, about the differences between theories with no observable differences, can be important.
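To make “Occamian versus anti-Occamian” concrete, here is a toy pair of priors over the two observationally equivalent theories, weighted by description length (the bit counts are invented for illustration, and the 0.9 pressing threshold comes from the earlier sketch):

```python
# Two observationally equivalent theories, distinguished only by the
# length of their descriptions (toy bit counts, invented for this sketch).
K = {"ordinary physics": 10, "physics with a teleport exception": 14}

def normalize(w):
    total = sum(w.values())
    return {t: v / total for t, v in w.items()}

occamian = normalize({t: 2.0 ** -k for t, k in K.items()})      # favors short
anti_occamian = normalize({t: 2.0 ** k for t, k in K.items()})  # favors long

for name, prior in (("Occamian", occamian), ("anti-Occamian", anti_occamian)):
    p = prior["physics with a teleport exception"]
    action = "presses" if p > 0.9 else "does not press"  # threshold from above
    print(f"{name}: P(exception theory) = {p:.3f} -> {action} the button")
```

Same observations, opposite actions; the disagreement lives entirely in the prior.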
With all due respect, you don’t know that you wouldn’t press it. It depends on the implementation of the paperclip maximizer, and how to “properly” implement one is exactly the issue we’re discussing here.