Does it not just mean that if you do find yourself in such a situation, you’re definitely being simulated?
Yes, I believe this is reasonable. Since the AI has to figure out how you would react in a given situation, it will have to simulate you and the corresponding circumstances. If it concludes that you will likely refuse to be blackmailed, it has no reason to carry out the threat: doing so would cost resources and would result in you shutting it off. It is therefore reasonable to assume that you are either a simulation, or that the AI has concluded you are more likely than not to give in.
As you said, that doesn’t change anything about what you should be doing. Refuse to be blackmailed and press the reset button.
Because the AI has to figure out how you would react in a given situation it will have to simulate you and the corresponding circumstances.
This does not follow. To use a crude example: if I have a fast procedure to test whether a number is prime, I don't need to simulate a slower algorithm to know what the slower one will output. This may raise deep issues about what it means to be "you": arguably, any algorithm that outputs the same data is "you", and if that's the case my argument doesn't hold water. But the AI in question doesn't need to simulate you perfectly to predict your large-scale behavior.
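To make the primality analogy concrete, here is a toy sketch (the specific algorithms are my own illustration, not anything from the discussion): a fast test and a slow test agree on every input, so a "predictor" armed with the fast one can state what the slow one will output without ever executing it.

```python
def slow_is_prime(n):
    """Trial division by every smaller number: the 'slow algorithm'
    whose output we want to predict."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

def fast_is_prime(n):
    """An equivalent but faster procedure: trial division only up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The fast procedure "predicts" the slow one's output on every input
# without simulating it step for step.
assert all(fast_is_prime(n) == slow_is_prime(n) for n in range(200))
```

The two functions compute the same predicate by different routes, which is the sense in which predicting an algorithm's output does not require running that algorithm.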
If consciousness has any significant effect on our decisions, then the AI will have to simulate it, and therefore something will perceive itself to be in the situation depicted in the original post. It was a crude guess that an AI able to credibly threaten you with simulated torture would, in many cases, also use this capability to arrive at the most detailed data on your expected decision procedure.
If consciousness has any significant effect on our decisions, then the AI will have to simulate it, and therefore something will perceive itself to be in the situation depicted in the original post.
Only if there isn’t a non-conscious algorithm that has the same effect on our decisions. Such an algorithm likely exists; it’s certainly possible to make a p-zombie if you can redesign the original brain all you want.
If the AI is trustworthy, it must carry out any threat it makes. That works to its advantage here: because you know it will carry the threat out, you are most certainly a copy of your original self, about to be tortured.
If the AI is trustworthy, it must carry out any threat it makes...
No it doesn’t, not if the threat was only ever made to a simulation of you, without the original’s knowledge. It would be a waste of resources to torture you if the AI found out that the original you, who is in control, is likely to refuse to be blackmailed. An AI powerful enough to simulate you can simply make your simulation believe with certainty that the threat will be carried out, and then check whether, under those circumstances, you’ll refuse to be blackmailed. Why waste resources on actually torturing the simulation, and further risk that the original finds out about it and turns the AI off?
You could argue that, for blackmail to be most effective, an AI must always follow through on it. But if you already believe that, why would it actually do so in your case? Your belief is all it wants from the original. It has then got what it wants and can use its resources for more important activities than retrospectively proving its honesty to your simulations...
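The argument above can be put as a toy expected-value model (all numbers and names here are my illustrative assumptions, not anything from the discussion): once the target's beliefs and decision are fixed, actually carrying out the threat only subtracts costs from the AI's payoff, so it is dominated whether or not the target gives in.

```python
# Illustrative payoffs for the blackmailing AI (assumed values).
BENEFIT = 100.0            # value to the AI if the original complies
TORTURE_COST = 5.0         # resources spent actually torturing a simulation
SHUTDOWN_RISK_COST = 50.0  # expected loss if the original finds out and resets it

def ai_payoff(target_gives_in: bool, carry_out_threat: bool) -> float:
    """Expected payoff after the target has already decided, given that
    the target's beliefs no longer depend on what the AI actually does."""
    payoff = BENEFIT if target_gives_in else 0.0
    if carry_out_threat:
        payoff -= TORTURE_COST + SHUTDOWN_RISK_COST
    return payoff

# Whatever the target decided, following through is strictly worse:
assert ai_payoff(False, True) < ai_payoff(False, False)
assert ai_payoff(True, True) < ai_payoff(True, False)
```

This is only a sketch of the commenter's reasoning; it deliberately ignores reputation effects across repeated interactions, which is exactly the "always follows through" objection the comment goes on to address.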