Humans don’t follow any decision theory consistently. They sometimes give in to blackmail, and at other times resist blackmail. If you convinced a bunch of people to take acausal blackmail seriously, presumably some subset would give in and some subset would resist, since that’s what we see in ordinary blackmail situations. What would be interesting is if (a) there were some applicable reasoning norm that forced us to give in to acausal blackmail on pain of irrationality, or (b) there were some known human irrationality that made us inevitably susceptible to acausal blackmail. But I don’t think Roko gave a good argument for either of those claims.
From my last comment: “there are probably some decision theories that let agents acausally blackmail each other”. But if humans frequently make use of heuristics like ‘punish blackmailers’ and ‘never give in to blackmailers’, and if normative decision theory says they’re right to do so, there’s less practical import to ‘blackmailable agents are possible’.
This implies that the future AI uses a decision theory that two-boxes in Newcomb’s problem, contradicting the premise that it one-boxes.
No it doesn’t. If you model Newcomb’s problem as a Prisoner’s Dilemma, then one-boxing maps on to cooperating and two-boxing maps on to defecting. For Omega, cooperating means ‘I put money in both boxes’ and defecting means ‘I put money in just one box’. TDT recognizes that the only two options are mutual cooperation or mutual defection, so TDT cooperates.
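The mapping above can be made concrete with a toy model. Everything here is an illustrative sketch with made-up payoff numbers (the standard $1,000 / $1,000,000 box values); the key structural assumption is that Omega's prediction is perfect, so the agent's move and Omega's move are perfectly correlated and the off-diagonal cells are unreachable.

```python
# Toy model: Newcomb's problem as a Prisoner's Dilemma with perfectly
# correlated moves. Payoff numbers are the usual illustrative box values.

# Agent's winnings given (agent's move, Omega's move).
# 'C' for the agent = one-box; 'C' for Omega = put money in both boxes.
PAYOFF = {
    ('C', 'C'): 1_000_000,   # one-box, big box full
    ('C', 'D'): 0,           # one-box, big box empty
    ('D', 'C'): 1_001_000,   # two-box, big box full (unreachable if correlated)
    ('D', 'D'): 1_000,       # two-box, big box empty
}

def correlated_outcome(agent_move):
    """Omega's prediction is perfect, so its move always matches the agent's."""
    return PAYOFF[(agent_move, agent_move)]

# Only the diagonal is on the table, and mutual cooperation beats
# mutual defection -- which is why TDT one-boxes here.
assert correlated_outcome('C') > correlated_outcome('D')
```

The point of the sketch is that once the off-diagonal cells are deleted, "cooperate" is simply the better of the two remaining options.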
Blackmail works analogously. Perhaps the blackmailer has five demands. For the blackmailee, full cooperation means ‘giving in to all five demands’; full defection means ‘rejecting all five demands’; and there are also intermediary levels (e.g., giving in to two demands while rejecting the other three), with the blackmailee preferring to do as little as possible.
For the blackmailer, full cooperation means ‘expending resources to punish the blackmailee in proportion to how many of my demands were met’. Full defection means ‘expending no resources to punish the blackmailee even if some demands aren’t met’. In other words, since harming past agents is costly, a blackmailer’s favorite scenario is always ‘the blackmailee, fearing punishment, gives in to most or all of my demands; but I don’t bother punishing them regardless of how many of my demands they ignored’. We could say that full defection doesn’t even bother to check how many of the demands were met, except insofar as this is useful for other goals.
The blackmailer wants to look as scary as possible (to get the blackmailee to cooperate) and then defect at the last moment anyway (by not following through on the threat), if at all possible. In terms of Newcomb’s problem, this is the same as preferring to trick Omega into thinking you’ll one-box, and then two-boxing anyway. We usually construct Newcomb’s problem in such a way that this is impossible; therefore TDT cooperates. But in the real world mutual cooperation of this sort is difficult to engineer, which makes fully credible acausal blackmail at least as difficult.
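The incentive structure in the last three paragraphs can be sketched as a toy payoff model. All the numbers here are assumptions chosen for illustration (five demands, a fixed cost per demand met, a fixed cost per punishment inflicted); nothing hangs on the particular values, only on punishment being costly for the blackmailer.

```python
# Toy model of the blackmail game described above. All numbers are
# illustrative assumptions.

N_DEMANDS = 5
COST_PER_DEMAND = 10   # blackmailee's cost of giving in to one demand
PUNISHMENT = 100       # harm per unmet demand if the threat is carried out
PUNISH_COST = 5        # blackmailer's cost of punishing one unmet demand
GAIN_PER_DEMAND = 10   # blackmailer's gain per demand met

def blackmailee_payoff(demands_met, punishes):
    unmet = N_DEMANDS - demands_met
    harm = PUNISHMENT * unmet if punishes else 0
    return -COST_PER_DEMAND * demands_met - harm

def blackmailer_payoff(demands_met, punishes):
    unmet = N_DEMANDS - demands_met
    cost = PUNISH_COST * unmet if punishes else 0
    return GAIN_PER_DEMAND * demands_met - cost

# Because punishment is costly, not punishing weakly dominates punishing
# for the blackmailer at every level of compliance...
for m in range(N_DEMANDS + 1):
    assert blackmailer_payoff(m, False) >= blackmailer_payoff(m, True)

# ...so the blackmailer's favorite scenario is full compliance with no
# follow-through -- the "defect at the last moment" cell.
assert blackmailer_payoff(N_DEMANDS, False) == max(
    blackmailer_payoff(m, p) for m in range(N_DEMANDS + 1) for p in (False, True)
)
```

This is exactly the structure that makes the threat hard to make credible: carrying it out is never in the blackmailer's local interest once the blackmailee's choice is fixed.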
This implies that the future AI will have a deontological rule that says “Don’t blackmail” somehow hard-coded in it, contradicting the premise that it will be a utilitarian.
I think you misunderstood point 3. 3 is a follow-up to 2: humans and AI systems alike have incentives to discourage blackmail, which increases the likelihood that blackmail is a self-defeating strategy.
Shut up and multiply.
Eliezer has endorsed the claim “two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one”. This doesn’t tell us how bad the act of blackmail itself is, it doesn’t tell us how faithfully we should implement that idea in autonomous AI systems, and it doesn’t tell us how likely it is that a superintelligent AI would find itself forced into this particular moral dilemma.
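For what it's worth, the quoted claim is just linearity of aggregation, which a one-line sketch makes explicit. The function below is an illustrative stand-in for an additive utility function, not anything from a specific formal theory.

```python
# Sketch of the quoted linear-aggregation claim: under a utility function
# that is additive over independent, non-interacting harms, total badness
# scales exactly with the count. Numbers are illustrative.

def total_disutility(harm_badness, n_occurrences):
    """Independent, non-interacting harms aggregate linearly."""
    return harm_badness * n_occurrences

# Two independent occurrences are exactly twice as bad as one:
assert total_disutility(3.0, 2) == 2 * total_disutility(3.0, 1)
```

Note what the sketch does not settle: how bad the act of blackmail itself is, and how likely the dilemma is to arise at all; those require separate premises.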
Since Eliezer asserts a CEV-based agent wouldn’t blackmail humans, the next step in shoring up Roko’s argument would be to do more to connect the dots from “two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one” to a real-world worry about AI systems actually blackmailing people conditional on claims (a) and (c). ‘I find it scary to think a superintelligent AI might follow the kind of reasoning that can ever privilege torture over dust specks’ is not the same thing as ‘I’m scared a superintelligent AI will actually torture people because this will in fact be the best way to prevent a superastronomically large number of dust specks from ending up in people’s eyes’, so Roko’s particular argument has a high evidential burden.