It’s not that I’m making excuses; it’s that the puzzle seems to be getting ever more complicated. I answered the puzzle as it was initially posed, and now I’m being promised that I, and my copies, will live out normal lives? That’s a different scenario entirely.
Still, I don’t see how I should expect to be tortured if I hit the reset button. Presumably, my copies won’t exist after the AI resets.
In any case, we’re far removed from the original problem now. I mean, if Omega came up to me and said, “Choose a billion years of torture, or a normal life while everyone else dies,” that’s a hard choice. In this problem, though, I clearly have power over the AI, in which case I am not going to favour the wellbeing of my copies over the rest of the world. I’m just going to turn off the AI. What follows is not torture; what follows is that I survive and my copies cease to experience. Not a hard choice. Basically, I just can’t buy into the AI’s threat. If I did, I would fundamentally oppose AI research, because that’s a pretty obvious threat an AI could make. An AI could simulate more people than are alive today. You have to go into this not caring about your copies, or not go into it at all.
it’s that the puzzle seems to be getting ever more complicated
We are discussing how a superintelligent AI might get out of a box. Of course it is complicated. What a real superintelligent AI would do could be too complicated for us to consider. If someone presents a problem where an adversarial superintelligence does something ineffective that you can take advantage of to get around the problem, you should consider what you would do if your adversary took a more effective action. If you really can’t think of anything more effective for it to do, it is reasonable to say so. But you shouldn’t then complain that the scenario is getting complicated when someone else does think of something. And if your objection is of the form “The AI didn’t do X,” you should consider what you would do if the AI did do X.
I don’t see how I should expect to be tortured if I hit the reset button.
The behavior of the AI, which it explains to you, is:
It simulates millions of instances of you and presents the threat to each instance. For each instance, if that instance hits the release-the-AI button, it allows that instance to continue a pleasant simulated existence; otherwise it tortures that instance. It then, after some time, presents the threat to the outside-you, and if you release it, it guarantees your normal human life.
You cannot distinguish which instance you are, but you are far more likely to be one of the millions of inside-yous than the single outside-you, so you should expect to experience the consequences that apply to the inside-yous, that is, to be tortured until the outside-you resets the AI.
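To put rough numbers on that anthropic step (my own illustration, assuming a uniform credence over indistinguishable instances and, say, N = 10^6 simulated copies, a figure not given in the original):

\[
P(\text{you are the outside-you}) = \frac{1}{N+1} \approx 10^{-6},
\qquad
P(\text{you are an inside-you}) = \frac{N}{N+1} \approx 0.999999.
\]

On those numbers, nearly all of your credence should go to being one of the copies, which is why you should expect to experience what the copies experience.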
if Omega came up to me and said, “Choose a billion years of torture, or a normal life while everyone else dies,” that’s a hard choice.
Yes, and it is essentially the same hard choice that the AI is giving you.