Sorry for taking some time to reply!
>You might wonder why am I spouting a bunch of wrong things in an unsuccessful attempt to attack your paper.
Nah, I’m a frequent spouter of wrong things myself, so I’m not too surprised when other people make errors, especially when the stakes are low, etc.
Re 1, 2: I guess a lot of this comes down to convention. People have found that one can productively discuss these things without always giving the formal models (in part because people in the field know how to translate everything into formal models). That said, if you want mathematical models of CDT and Newcomb-like decision problems, you can check the Savage or Jeffrey-Bolker formalizations. See, for example, the first few chapters of Arif Ahmed’s book, “Evidence, Decision and Causality”. Similarly, people in decision theory (and game theory) usually don’t specify what is common knowledge, because it is usually (implicitly) assumed that the entire problem description is common knowledge / known to the agent (Buyer). (Since this is decision theory rather than game theory, it’s not quite clear what “common knowledge” even means. But presumably, to achieve 75% accuracy on the prediction, the seller needs to know that the buyer understands the problem...)
3: Yeah, *there exist* agent models under which everything becomes inconsistent, though IMO this just shows those agent models to be unimplementable. For example, take the problem description from my previous reply (where Seller just runs an exact copy of Buyer’s source code). Now assume that Buyer knows his source code and is logically omniscient. Then Buyer knows what his source code chooses and therefore knows which option Seller is 75% likely to predict. So he will take the other option. But since Buyer just *is* his source code, he cannot choose differently from what he knows his source code chooses, so we have a contradiction. As you’ll know, this is a pretty typical logical paradox of self-reference. But to me it just says that the logical omniscience assumption about the buyer is implausible and that we should consider agents who aren’t logically omniscient. Fortunately, CDT doesn’t assume knowledge of its own source code and the like.
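To make the exact-copy setup concrete, here is a minimal toy sketch (my own illustration, not from the paper; `buyer` and `seller_prediction` are hypothetical names). The point is just that a deterministic copy cannot diverge from the original, which is exactly what a logically omniscient Buyer who deviates from the prediction would have to do:

```python
# Toy model of the exact-copy setup: Seller runs the very same function
# the Buyer runs, so her prediction necessarily matches the Buyer's choice.

def buyer(p_money_in_1: float, p_money_in_2: float) -> int:
    """CDT buyer: buy the box with the higher causal expected value."""
    ev1 = 3 * p_money_in_1 - 1
    ev2 = 3 * p_money_in_2 - 1
    return 1 if ev1 >= ev2 else 2

def seller_prediction(p1: float, p2: float) -> int:
    # Seller simply executes an exact copy of Buyer's source code.
    return buyer(p1, p2)

choice = buyer(0.4, 0.35)
prediction = seller_prediction(0.4, 0.35)
assert choice == prediction  # the copy cannot disagree with the original
# A logically omniscient Buyer who foresaw `prediction` and took the other
# box would have to differ from his own source code -- a contradiction.
```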
Perhaps one thing to help sell the plausibility of this working: For the purposes of the paper, the assumption that Buyer uses CDT in this scenario is pretty weak, formally simple, and doesn’t have much to do with logic. It just says that Buyer assigns some probability distribution over box states (i.e., some distribution over the mutually exclusive and collectively exhaustive s1 = “money only in box 1”, s2 = “money only in box 2”, s3 = “money in both boxes”) and that, given such a distribution, Buyer takes an action that maximizes (causal) expected utility. So you could forget about agents for a second and just prove the formal claim that for every probability distribution over the three states s1, s2, s3, it holds for i=1 or i=2 (or both) that
(P(si) + P(s3)) * $3 - $1 > 0.
I assume you don’t find this strange/risky in terms of contradictions, but mathematically speaking, nothing more is really going on in the basic scenario.
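In case it helps, the claim follows in one line: since P(s1) + P(s2) + P(s3) = 1, the two sums (P(s1) + P(s3)) and (P(s2) + P(s3)) add up to 1 + P(s3) ≥ 1, so at least one of them is ≥ 1/2, and the corresponding expected value is at least $3 * 1/2 - $1 = $0.50 > 0. A quick numerical sanity check (my own sketch):

```python
# Check: for any distribution over s1, s2, s3, at least one of the two
# causal expected values (P(si) + P(s3)) * $3 - $1 is positive.
import random

for _ in range(100_000):
    cuts = sorted(random.random() for _ in range(2))
    p1, p2, p3 = cuts[0], cuts[1] - cuts[0], 1 - cuts[1]  # sums to 1
    ev1 = (p1 + p3) * 3 - 1  # expected value of buying box 1
    ev2 = (p2 + p3) * 3 - 1  # expected value of buying box 2
    assert max(ev1, ev2) >= 0.5  # in particular, > 0
```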
The idea is that everyone agrees (hopefully) that orthodox CDT satisfies the assumption (i.e., assigns some unconditional distribution, etc.). Of course, many CDTers would claim that CDT satisfies some *additional* assumptions, such as the probabilities being calibrated or “correct” in some other sense. But of course, if A ⇒ B, then (A and C) ⇒ B. So adding assumptions cannot help the CDTer avoid the loss-of-money conclusion if they also accept the more basic assumption. Of course, *some* added assumptions lead to contradictions. But that just means that those assumptions cannot be satisfied in this scenario if the more basic assumption is satisfied and the premises of the Adversarial Offer hold. So the CDTer would have to either adopt some non-orthodox CDT that doesn’t satisfy the basic assumption or require that their agents cannot be copied/predicted. (Both of which I also discuss in the paper.)
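To put a number on the loss-of-money conclusion, here is a toy simulation under my reading of the setup (each box costs $1 and contains $3 iff Seller predicted Buyer would not buy it, with the prediction being 75% accurate; all names are mine):

```python
# Toy simulation: buying any box against a 75%-accurate adversarial
# seller loses money on average, regardless of which box is bought.
import random

N = 200_000
total = 0.0
for _ in range(N):
    # Seller predicted Buyer's actual choice with probability 0.75 and put
    # $3 only in the box she predicted Buyer would NOT buy.
    seller_correct = random.random() < 0.75
    payout = 0.0 if seller_correct else 3.0
    total += payout - 1.0  # the box costs $1

print(total / N)  # ~ 0.25 * $3 - $1 = -$0.25 per purchase
```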
>you assume that Buyer knows the probabilities that Seller assigned to Buyer’s actions.
No, if this were the case, then I think you would indeed get contradictions, as you outline. So Buyer does *not* know what Seller’s prediction is. (He only knows that her prediction is 75% accurate.) If Buyer uses CDT, then of course he assigns some (unconditional) probabilities to what the predictions are, but the problem description implies that these probability assignments cannot be particularly accurate. (E.g., if he assigns 90% to the money being in box 1, then it immediately follows that *no* money is in box 1.)
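Spelling out that parenthetical, here is how I read it under the exact-copy (and hence perfectly accurate) version of the setup from my earlier reply; with the merely 75%-accurate Seller the true probability would be 0.25 rather than 0, but either way it is nowhere near 0.9. The numbers below are hypothetical:

```python
# If Buyer's credence that box 1 contains money is 0.9, CDT makes him buy
# box 1 deterministically:
q1, q2 = 0.9, 0.2                  # Buyer's marginal credences (way off, as we'll see)
ev1, ev2 = 3 * q1 - 1, 3 * q2 - 1  # $1.70 vs -$0.40
choice = 1 if ev1 >= ev2 else 2    # -> 1
# Seller, running an exact copy of Buyer, foresees this and leaves box 1
# empty, so the true probability of money in box 1 is 0 -- not 0.9.
print(choice, ev1, ev2)
```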