A perfectly rational agent would almost certainly carry through their pre-commitment to reset the AI [...]
Actually, now that I think about it, would they? The pre-commitment exists for the sole purpose of discouraging blackmail, and if a blackmailer tries to blackmail you anyway after learning of your pre-commitment, you follow through on it for reasons relating to reflective consistency and/or TDT/UDT. But if the potential blackmailer had already pre-committed to blackmail anyone regardless of any pre-commitments they had made, they’d blackmail you anyway and then carry out whatever threat they were making after you inevitably refuse to comply, resulting in a net loss of utility for both of you (you suffer whatever damage they were threatening to inflict, and they spend resources carrying out the threat). In effect, it seems that whoever pre-commits first (or, more accurately, makes their pre-commitment known first) has the advantage… which means that if I ever anticipate having to blackmail any agent, I should publicly pre-commit right now to never update on any other agents’ pre-commitments to refuse blackmail.

The corresponding strategy for agents hoping to discourage blackmail is not to blanket-refuse to comply with any demand made under blackmail, but to refuse only those demands made by agents who had previously learned of your pre-commitment and decided to blackmail you anyway. That way, you continue to disincentivize blackmailers who know of your pre-commitment, but you will almost certainly choose the lesser of two evils should you ever actually get blackmailed. (I say “almost certainly” because there’s a small probability that you will encounter a really weird agent that decides to try to blackmail you even after learning of your pre-commitment to ignore blackmail from such agents, in which case you would of course be forced to ignore them and suffer the consequences.)
If the above reasoning is correct (which I admit is far from certain), then the AI in my scenario has effectively implemented the ultimate pre-commitment: it doesn’t even know about your pre-commitment to ignore blackmail, because it lacks the information needed to simulate you properly. The above argument, then, says you should press the “Release AI” button, assuming you pre-committed to do so (which you would have, because of that same argument).
Anything wrong with my reasoning?
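To make the commitment-order intuition concrete, here is a toy payoff model of the exchange described above, as a minimal Python sketch. The payoff numbers, the function names, and the assumption that a blackmailer who is still free to update simply best-responds are all illustrative choices, not anything specified in the discussion.

```python
# Toy model of the blackmail game sketched above. Payoff entries are invented
# purely for illustration: each is (victim utility, blackmailer utility).
PAYOFFS = {
    ("no_blackmail", None):   (0, 0),     # blackmailer walks away
    ("blackmail", "comply"):  (-5, 5),    # victim pays up
    ("blackmail", "refuse"):  (-10, -2),  # threat carried out: both lose
}

def victim_move(committed_to_refuse: bool) -> str:
    """A victim bound by a 'refuse all blackmail' pre-commitment never complies."""
    return "refuse" if committed_to_refuse else "comply"

def blackmailer_move(knows_victim_commitment: bool, committed_to_blackmail: bool) -> str:
    """A blackmailer who expects refusal, and is still free to update, declines to blackmail."""
    if knows_victim_commitment and not committed_to_blackmail:
        return "no_blackmail"
    return "blackmail"

def play(victim_committed: bool, blackmailer_knows: bool, blackmailer_committed: bool):
    b = blackmailer_move(blackmailer_knows, blackmailer_committed)
    v = victim_move(victim_committed) if b == "blackmail" else None
    return PAYOFFS[(b, v)]

# Victim's pre-commitment becomes known first: blackmail is deterred.
print(play(victim_committed=True, blackmailer_knows=True, blackmailer_committed=False))  # (0, 0)
# Blackmailer pre-committed first to ignore such pre-commitments: mutual loss.
print(play(victim_committed=True, blackmailer_knows=True, blackmailer_committed=True))   # (-10, -2)
```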
The corresponding strategy for agents hoping to discourage blackmail is not to blanket-refuse to comply with any demand made under blackmail, but to refuse only those demands made by agents who had previously learned of your pre-commitment and decided to blackmail you anyway.
So, if an agent hears of your pre-commitment, then that agent merely needs to ensure that you don’t hear that it has heard of your pre-commitment in order to be able to blackmail you?
What about an agent that deletes the knowledge of your pre-commitment from its own memories?
So, if an agent hears of your pre-commitment, then that agent merely needs to ensure that you don’t hear that it has heard of your pre-commitment in order to be able to blackmail you?
If you’re uncertain about whether or not your blackmailer has heard of your pre-commitment, then you should act as if they have, and ignore their blackmail accordingly. This also applies to agents who have deleted knowledge of your pre-commitment from their memories; you want to punish agents who spend time trying to think up loopholes in your pre-commitment, not reward them. The harder part, of course, is determining what threshold of uncertainty is required; to this I freely admit that I don’t know the answer.
EDIT: More generally, this seems to be an instance of a broader problem: the problem of obtaining information. Given perfect information, the decision theory works out, but by denying my agent access to certain key pieces of information about the blackmailer, you can force a sub-optimal outcome. Moreover, this seems to be true of any strategy that depends on your opponent’s epistemic state; you can always force such a strategy to fail by denying it the information it needs. The only strategies immune to this seem to be the extremely general ones (like “defect in one-shot Prisoner’s Dilemmas”), but those are guaranteed to produce a sub-optimal result in a number of cases (against a TDT/UDT-like agent, for example).
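A minimal sketch of the “act as if they have heard” policy above, and of the information-denial failure mode from the EDIT. The 0.2 threshold and the credences in the examples are arbitrary placeholders; nothing in the discussion fixes actual numbers.

```python
# Sketch of the threshold policy described above; all numbers are placeholders.
def respond_to_blackmail(p_blackmailer_heard: float, threshold: float = 0.2) -> str:
    """Refuse whenever it is sufficiently plausible that the blackmailer heard of
    the pre-commitment (or deliberately erased its knowledge of it)."""
    if p_blackmailer_heard >= threshold:
        return "refuse"   # punish agents who knew, or who hid that they knew
    return "comply"       # lesser of two evils against a genuinely ignorant agent

print(respond_to_blackmail(0.9))   # refuse: they almost certainly knew
print(respond_to_blackmail(0.3))   # refuse: uncertain, so act as if they had heard
print(respond_to_blackmail(0.05))  # comply: the information-denial failure mode, since
                                   # an agent that keeps your credence this low gets paid
```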
If you’re uncertain about whether or not your blackmailer has heard of your pre-commitment, then you should act as if they have, and ignore their blackmail accordingly. This also applies to agents who have deleted knowledge of your pre-commitment from their memories; you want to punish agents who spend time trying to think up loopholes in your pre-commitment, not reward them. The harder part, of course, is determining what threshold of uncertainty is required; to this I freely admit that I don’t know the answer.
Hmmm. If an agent can work out what threshold of uncertainty you have decided on, and then engineer a situation where you think it is less likely than that threshold that the agent has heard of your pre-commitment, then your strategy will fail.
So even if you do find a way to calculate the ideal threshold, it will fail against an agent smart enough to repeat that calculation; unless, of course, you simply assume that all possible agents have necessarily heard of your pre-commitment (since an agent cannot engineer a situation in which the probability that it has heard of your pre-commitment is less than 0%). This, however, causes the strategy to simplify to “always reject blackmail, whether or not the agent has heard of your pre-commitment”.
Alternatively, you can ensure that any agent able to capture you in a simulation must also know of your pre-commitment; for example, by having it tattooed on yourself somewhere (thus, any agent which rebuilds a simulation of your body must include the tattoo, and therefore must know of the pre-commitment).
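One way to see the point about recomputing the threshold: a blackmailer that knows your cutoff just engineers your credence to sit below it, and the only cutoff it cannot get under is zero, at which point the policy is simply “always reject blackmail”. The sketch below is illustrative only, with arbitrary numbers.

```python
from typing import Optional

def credence_adversary_can_engineer(threshold: float) -> Optional[float]:
    """A blackmailer that has recomputed your threshold aims your credence that it
    has heard of the pre-commitment just underneath that threshold. Probabilities
    live in [0, 1], so only a threshold of zero leaves it no room."""
    if threshold <= 0.0:
        return None           # cannot push a probability below zero
    return threshold / 2      # any value in [0, threshold) works; this choice is arbitrary

for t in (0.5, 0.2, 0.0):
    c = credence_adversary_can_engineer(t)
    if c is None:
        print(f"threshold {t}: unexploitable, i.e. 'always reject blackmail'")
    else:
        print(f"threshold {t}: adversary targets credence {c} and the blackmail gets paid")
```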
If you make me play the Iterated Prisoner’s Dilemma with shared source code, I can come up with a provably optimal solution against whatever opponent I’m playing against
Doesn’t that run into the halting problem?
Argh, you ninja’d my edit. I have now removed that part of my comment (since it seemed somewhat irrelevant to my main point).
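On the removed claim about the Iterated Prisoner’s Dilemma with shared source code, and the halting-problem worry: simulating an arbitrary opponent’s source directly may never terminate, which is why sketches of this kind usually impose a step or time budget and fall back to a default move on timeout. The bot below is a generic illustration of that workaround; the function names, the one-second budget, and the cooperate-only-with-predicted-cooperators rule are placeholders, not a strategy anyone here proposed.

```python
# Why shared-source play brushes up against the halting problem: running the
# opponent's code directly may never return, so this sketch simulates it in a
# child process under a time budget and treats a timeout as "assume defection".
import multiprocessing

def _run_opponent(opponent_source: str, my_source: str, queue) -> None:
    """Executed in a child process: build the opponent from its source and ask it to move."""
    namespace = {}
    exec(opponent_source, namespace)   # toy setting; ignore the obvious trust issues
    queue.put(namespace["move"](my_source))

def predict_with_budget(opponent_source: str, my_source: str, seconds: float = 1.0):
    """Simulate the opponent under a time budget; None means it didn't halt in time."""
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_run_opponent,
                                   args=(opponent_source, my_source, queue))
    proc.start()
    proc.join(seconds)
    if proc.is_alive():
        proc.terminate()               # the opponent's code may simply never halt
        proc.join()
        return None
    return queue.get() if not queue.empty() else None

MY_SOURCE = "def move(opponent_source): return 'C'"   # hypothetical self-description

def my_move(opponent_source: str) -> str:
    """Cooperate only if the budgeted simulation predicts the opponent cooperates."""
    return "C" if predict_with_budget(opponent_source, MY_SOURCE) == "C" else "D"

if __name__ == "__main__":
    print(my_move("def move(opponent_source): return 'C'"))             # C
    print(my_move("def move(opponent_source):\n    while True: pass"))  # D: it never halts
```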