I think I see a reasonable way to represent the Logical Counterfactual Mugging in the extensive form. I can draw a diagram if it’s desirable.
Well, either that or an explanation. I can’t just figure it out from your comment.
I mean what I’m saying, by the way. As long as you’re OK with assuming you can have a logical prior in the first place, I don’t see any issue with representing LCM with the diagram I made.
Yes, it means the LCM is no different from the CM, but I don’t see an issue with that. Apart from involving different kinds of priors (logical vs. non-logical), the two problems are indeed identical.
If I’m missing something, please tell me what it is; I’d like to know!
I agree that your diagram gives the right answer to logical Counterfactual Mugging. The problem is that it’s not formal enough, because you don’t really explain what a “logical prior” is. For example, if we have logical Counterfactual Mugging based on a digit of pi, then one of the two possible worlds is logically inconsistent. How do we know that calculating the digit of pi by a different method will give the same result in that world, rather than blow up the calculator or something? And once you give a precise definition of “logical prior”, the problems begin to look more like programs or logical formulas than causal diagrams.
That’s fair enough; the “logical prior” is definitely a relatively big assumption, although it’s very hard to justify anything other than a 50/50 split between the two possibilities.
However, the LCM only takes place in one of the two possible worlds (the real one); the other never happens. Either way, you’re calculating the digit of pi in this world; it’s just that in one of the two possibilities (which, as far as you know, are equally likely) you are the subject of logical counterfactual surgery by Omega. Assuming this is the case, surely calculating the digit of pi isn’t going to help?
Your decision-making algorithm, not knowing which of the two inputs it’s actually receiving (counterfactual vs. real), outputs “GIVE”. Moreover, it chooses “GIVE” not merely for counterfactual utilons, but for real ones.
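For concreteness, here is a quick expected-value check under the 50/50 logical prior, using the usual Counterfactual Mugging payoffs (pay $100 when asked, receive $10,000 in the other branch if you would have paid); those numbers are assumed here rather than restated in the thread:

```python
# A minimal sketch of the expected (real) utilons of each policy under the
# 50/50 logical prior. The $100 / $10,000 payoffs are the standard
# Counterfactual Mugging numbers, assumed here, not quoted from this thread.
p = 0.5                                      # prior that the digit went against you
ev_give   = p * (-100) + (1 - p) * 10_000    # pay when asked; rewarded in the other branch
ev_refuse = p * 0      + (1 - p) * 0         # Omega predicts refusal, so no reward either way
print(ev_give, ev_refuse)                    # 4950.0 0.0
```

So committing to GIVE beats REFUSE in real expected utilons, which is the sense in which the choice isn’t just about the counterfactual branch.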
Of course, we’ve assumed that the logical counterfactual surgery Omega does is a coherent concept to begin with. The whole point of Omega is that Omega gets the benefit of the doubt, but in this situation it’s definitely still worthwhile to ask whether it makes sense.
In particular, maybe it’s possible to make a logical counterfactual surgery detector that is robust even against Omega. If you can do that, then you win regardless of which way the logical coin came up. I don’t think trying to calculate the relevant digit of pi is good enough, though.
Here’s an idea for a “logical counterfactual surgery detector”:
Run a sandboxed version of your proof engine that attempts to maximally entangle that digit of pi with other logical facts. For example, it might prove that “if the 10000th decimal digit of pi is 8, then ⊥”. If you detect that the sandboxed proof engine undergoes a logical explosion, then GIVE. Otherwise, REFUSE.
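To make the control flow of that detector explicit, here is a minimal sketch; `sandboxed_prover` is a hypothetical stand-in for the sandboxed proof engine described above, not an actual implementation:

```python
# A control-flow sketch of the "logical counterfactual surgery detector",
# not a working proof engine. `sandboxed_prover` is hypothetical: given the
# assumption about the digit, it should try to entangle that claim with as
# many other logical facts as it can and report whether it derived ⊥.
from typing import Callable

def surgery_detector(asserted_digit: int,
                     sandboxed_prover: Callable[[str], bool]) -> str:
    """GIVE if assuming the asserted digit makes the sandbox explode
    (suggesting we are inside Omega's counterfactual), else REFUSE."""
    assumption = f"the 10000th decimal digit of pi is {asserted_digit}"
    exploded = sandboxed_prover(assumption)  # True iff the sandbox derived a contradiction
    return "GIVE" if exploded else "REFUSE"
```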
Agreed. I was initially hesitant because you said “the model doesn’t have enough detail”, so I was checking to see if there was a response along the lines of “the model would necessarily fail to represent X” that you would bring up. Anyway, I have in fact discovered a truly marvelous model, but the margin is too small to contain it...
Fortunately, though, URL links are big enough to fit in the margin! http://www.gliffy.com/go/publish/6135045
For the purposes of the model itself you should, of course, ignore all of the suggestively named LISP tokens, except the labels (ALLCAPS) on the outgoing edges of nodes in the same information set (as these tell you which actions are “the same action”). In other words, the actual “model” consists only of the following (a rough data-structure sketch follows the list):
(1) the nodes
(2) the directed edges
(3) the information set arrows (an equivalence relation)
(4) the identities of arrows out of the same information set (i.e. an equivalence relation on those arrows).
(5) the probabilities associated with a chance node
(6) the values inside the square boxes (utilities)
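As a rough illustration only (the field names are invented for this sketch and don’t come from the Gliffy diagram), those six ingredients could be bundled into a single structure like this:

```python
# A hedged sketch of a container for exactly the six ingredients listed above.
# Names and representations are illustrative, not taken from the diagram.
from dataclasses import dataclass

@dataclass
class ExtensiveFormModel:
    nodes: set[str]                             # (1) the nodes
    edges: set[tuple[str, str]]                 # (2) the directed edges, as (parent, child) pairs
    info_set: dict[str, str]                    # (3) node -> information-set id (the equivalence relation)
    action_label: dict[tuple[str, str], str]    # (4) edge -> action label, shared across an information set
    chance_probs: dict[tuple[str, str], float]  # (5) probability on each edge out of a chance node
    utility: dict[str, float]                   # (6) payoff at each terminal (square) node
```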
In the game as I’ve presented it, the optimal strategy is clearly to:
always GIVE when Omega asks you
if for some reason you happened to CALCULATE, you still GIVE after that (also I think you would be well-advised to run a full re-check of all of your AI subsystems to work out why you decided to CALCULATE. Hopefully it was just a cosmic ray ^_~)