1 - Humans can’t reliably precommit. Even if they could, precommitment is different from using an “acausal” decision theory. You don’t need precommitment to one-box in Newcomb’s problem, and the ability to precommit doesn’t by itself guarantee that you will one-box. In an adversarial game where the players can precommit and use a causal version of game theory, the one that can precommit first generally wins. E.g. Alice can precommit to ignore Bob’s threats, but she has no incentive to do so if Bob has already precommitted to ignore Alice’s precommitments, and so on. If you allow for “acausal” reasoning, then even having a time advantage doesn’t help: if Bob isn’t born yet, but Alice predicts that she will be in an adversarial game with Bob, that Bob will reason acausally, and that he will therefore have an incentive to threaten her and ignore her precommitments, then she has an incentive not to make such a precommitment.
2 - This implies that the future AI uses a decision theory that two-boxes in Newcomb’s problem, contradicting the premise that it one-boxes.
3 - This implies that the future AI will have a deontological rule that says “Don’t blackmail” somehow hard-coded into it, contradicting the premise that it will be a utilitarian. Indeed, humans may want to build an AI with such constraints, but in order to do so they will have to consider the possibility of blackmail and likely reject utilitarianism, which was the point of Roko’s argument.
4 - Shut up and multiply.
Humans don’t follow any decision theory consistently. They sometimes give in to blackmail, and at other times resist blackmail. If you convinced a bunch of people to take acausal blackmail seriously, presumably some subset would give in and some subset would resist, since that’s what we see in ordinary blackmail situations. What would be interesting is if (a) there were some applicable reasoning norm that forced us to give in to acausal blackmail on pain of irrationality, or (b) there were some known human irrationality that made us inevitably susceptible to acausal blackmail. But I don’t think Roko gave a good argument for either of those claims.
From my last comment: “there are probably some decision theories that let agents acausally blackmail each other”. But if humans frequently make use of heuristics like ‘punish blackmailers’ and ‘never give in to blackmailers’, and if normative decision theory says they’re right to do so, there’s less practical import to ‘blackmailable agents are possible’.
Point 2 doesn’t follow. If you model Newcomb’s problem as a Prisoner’s Dilemma, then one-boxing maps onto cooperating and two-boxing maps onto defecting. For Omega, cooperating means ‘I put money in both boxes’ and defecting means ‘I put money in just one box’. TDT recognizes that the only two achievable options are mutual cooperation or mutual defection, so TDT cooperates.
Blackmail works analogously. Perhaps the blackmailer has five demands. For the blackmailee, full cooperation means ‘giving in to all five demands’; full defection means ‘rejecting all five demands’; and there are also intermediary levels (e.g., giving in to two demands while rejecting the other three), with the blackmailee preferring to do as little as possible.
For the blackmailer, full cooperation means ‘expending resources to punish the blackmailee in proportion to how many of my demands weren’t met’. Full defection means ‘expending no resources to punish the blackmailee even if some demands aren’t met’. In other words, since harming past agents is costly, a blackmailer’s favorite scenario is always ‘the blackmailee, fearing punishment, gives in to most or all of my demands; but I don’t bother punishing them regardless of how many of my demands they ignored’. We could say that full defection doesn’t even bother to check how many of the demands were met, except insofar as this is useful for other goals.
The blackmailer wants to look as scary as possible (to get the blackmailee to cooperate) and then defect at the last moment anyway (by not following through on the threat), if at all possible. In terms of Newcomb’s problem, this is the same as preferring to trick Omega into thinking you’ll one-box, and then two-boxing anyway. We usually construct Newcomb’s problem in such a way that this is impossible; therefore TDT cooperates. But in the real world mutual cooperation of this sort is difficult to engineer, which makes fully credible acausal blackmail at least as difficult.
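The incentive structure described above can be made concrete with a toy payoff function. The numbers below are invented purely for illustration; only their ordering matters. Because following through on a threat costs resources, ‘don’t punish’ weakly dominates ‘punish’ at every level of compliance, which is exactly what makes the threat hard to make credible:

```python
# Toy model of the blackmailer's incentives, assuming five demands (as in
# the example above), a fixed value per demand met, and a positive cost
# to carry out punishment. All numeric values are made up for illustration.

N_DEMANDS = 5
DEMAND_VALUE = 10   # value to the blackmailer of each demand met (assumed)
PUNISH_COST = 4     # resources spent per ignored demand when punishing (assumed)

def blackmailer_payoff(demands_met: int, follows_through: bool) -> int:
    """Blackmailer's payoff given the blackmailee's compliance level."""
    payoff = demands_met * DEMAND_VALUE
    if follows_through:
        # Punishment scales with how many demands were ignored, and it is costly.
        payoff -= PUNISH_COST * (N_DEMANDS - demands_met)
    return payoff

# At every compliance level, not following through weakly dominates punishing:
for met in range(N_DEMANDS + 1):
    assert blackmailer_payoff(met, False) >= blackmailer_payoff(met, True)
```

The dominance only breaks if the blackmailee can somehow verify the blackmailer’s disposition in advance, which is the ‘Omega-like’ condition the paragraph above says is hard to engineer in the real world.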
I think you misunderstood point 3. It is a follow-up to point 2: humans and AI systems alike have incentives to discourage blackmail, which increases the likelihood that blackmail is a self-defeating strategy.
As for point 4: Eliezer has endorsed the claim “two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one”. This doesn’t tell us how bad the act of blackmail itself is, it doesn’t tell us how faithfully we should implement that idea in autonomous AI systems, and it doesn’t tell us how likely it is that a superintelligent AI would find itself forced into this particular moral dilemma.
Since Eliezer asserts a CEV-based agent wouldn’t blackmail humans, the next step in shoring up Roko’s argument would be to do more to connect the dots from “two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one” to a real-world worry about AI systems actually blackmailing people conditional on claims (a) and (c). ‘I find it scary to think a superintelligent AI might follow the kind of reasoning that can ever privilege torture over dust specks’ is not the same thing as ‘I’m scared a superintelligent AI will actually torture people because this will in fact be the best way to prevent a superastronomically large number of dust specks from ending up in people’s eyes’, so Roko’s particular argument has a high evidential burden.
“I precommit to shop at the store with the lowest price within some large distance, even if the cost of the gas and car depreciation to get to a farther store is greater than the savings I get from its lower price. If I do that, stores will have to compete with distant stores based on price, and thus it is more likely that nearby stores will have lower prices. However, this precommitment would only work if I am actually willing to go to the farther store when it has the lowest price even if I lose money”.
Miraculously, people do reliably act this way.
I doubt it. Reference?
Mostly because they don’t actually notice the cost of gas and car depreciation at the time...
You’ve described the mechanism by which the precommitment happened, not actually disputed whether it happens.
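To make the quoted store example concrete: per trip, honoring the precommitment can lose money, and the policy only pays off through its effect on nearby stores’ pricing, an effect that exists only if the commitment is actually kept. All numbers below are invented for illustration:

```python
# Toy arithmetic for the quoted shopping precommitment. Every figure here
# is an assumption of this sketch, not data from the discussion.

NEAR_PRICE = 100    # nearby store's price absent competitive pressure
FAR_PRICE = 95      # distant store's lower price
TRAVEL_COST = 8     # extra gas + car depreciation to reach the far store

# On a trip where the far store is cheaper, honoring the precommitment
# still loses money once travel costs are counted:
per_trip_loss = (FAR_PRICE + TRAVEL_COST) - NEAR_PRICE  # 103 - 100 = 3

def policy_worthwhile(discount: float, far_trip_fraction: float) -> bool:
    """True if the price cut the credible threat extracts from the nearby
    store outweighs the expected loss from occasionally honored far trips."""
    return discount > per_trip_loss * far_trip_fraction

print(policy_worthwhile(discount=2, far_trip_fraction=0.1))
```

This is the structure of the disagreement above: each honored trip is a loss in isolation, so the policy is only rational as a commitment, and only if it is in fact reliably kept.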
Many “irrational” actions by human beings can be analyzed as precommitment; for instance, wanting to take revenge on people who have hurt you even if the revenge doesn’t get you anything.