Does this particular thought experiment really have any practical application?
I can think of plenty of similar scenarios that are genuinely useful and worth considering, but all of them can be expressed in much simpler and more intuitive terms—e.g. when the offer will or might be repeated, or when you get to choose in advance whether to flip the coin and win $10,000 / lose $100. But with the scenario as stated—what real phenomenon is there that would reward you for being willing to counterfactually take an otherwise-detrimental action, for no reason other than qualifying for the counterfactual reward? Even if we settle on the best course of action in this contrived scenario—therefore what?
Precommitments are used in decision-theoretic problems. Some people have proposed that a good decision theory should take the action that it would have precommitted to, if it had known in advance to do such a thing. This is an attempt to examine the consequences of that.
This is an attempt to examine the consequences of that.
Yes, but if the artificial scenario doesn’t reflect anything in the real world, then even if we get the right answer, therefore what? It’s like being vaccinated against a fictitious disease; even if you successfully develop the antibodies, what good do they do?
It seems to me that the “beggars and gods” variant mentioned earlier in the comments, where the opportunity repeats itself each day, is actually a more useful study. Sure, it’s much more intuitive; it doesn’t tie our brains up in knots, trying to work out a way to intend to do something at a point when all our motivation to do so has evaporated. But reality doesn’t have to be complicated. Sometimes you just have to learn to throw in the pebble.
Decision theory is an attempt to formalize the human decision process. The point isn’t that we’re really unsure whether you should leave people to die of thirst; the point is how to encode that judgment in an actual decision theory. Like so many discussions on Less Wrong, this implicitly comes back to AI design: an AI needs a decision theory, and that decision theory needs to not have major failure modes, or at least its failure modes should be well understood.
If your AI somehow assigns a nonzero probability to “I will face a massive penalty unless I do this really weird action”, that ideally shouldn’t derail its entire decision process.
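To illustrate that failure mode, here is a toy sketch of a naive expected-utility maximizer being derailed by a tiny-probability threat. All of the numbers, and the expected_utility helper, are invented for the example, not taken from anything above.

```python
# A naive expected-utility maximizer confronted with a tiny probability
# of a huge penalty. Numbers are invented purely for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# "Do the really weird action": a sure cost of 100 utils.
do_weird_action = [(1.0, -100.0)]

# "Ignore it": free, except the agent assigns a one-in-a-billion chance
# of a 10^15-util penalty for refusing.
ignore = [(1.0 - 1e-9, 0.0), (1e-9, -1e15)]

print(expected_utility(do_weird_action))  # -100.0
print(expected_utility(ignore))           # -1000000.0
# The naive maximizer takes the weird action: its whole decision process
# is driven by the tiny-probability threat, which is the failure mode in question.
```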
The beggars-and-gods formulation is the same problem. “Omega” is just a handy abstraction for “don’t focus on how you got into this decision-theoretic situation”. Admittedly, this abstraction sometimes obscures the issue.
The beggars-and-gods formulation is the same problem.
I don’t think so; I think the element of repetition substantially alters it—but in a good way, one that makes it more useful in designing a real-world agent. Because in reality, we want to design decision theories that will solve problems multiple times.
At the point of meeting a beggar, although my prospects of obtaining a gold coin this time around are gone, nonetheless my overall commitment is not meaningless. I can still think, “I want to be the kind of person who gives pennies to beggars, because overall I will come out ahead”, and this thought remains applicable. I know that I can average out my losses with greater wins, and so I still want to stick to the algorithm.
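To make the “average out my losses with greater wins” point concrete, here is a minimal simulation of the repeated version. The 50/50 coin and the $10,000/$100 payoffs come from the thought experiment; the function names, round count, and the assumption that Omega rewards the agent’s standing disposition each round are my own scaffolding.

```python
import random

def repeated_mugging(is_a_payer, rounds=10_000, seed=0):
    """Total winnings for an agent whose disposition Omega can see each round.
    is_a_payer: whether this agent is 'the kind of person' who pays on tails."""
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        if rng.random() < 0.5:        # heads: Omega rewards a predicted payer
            if is_a_payer:
                total += 10_000
        elif is_a_payer:              # tails: the payer hands over 100
            total -= 100
    return total

print(repeated_mugging(True))    # roughly +4,950 per round on average
print(repeated_mugging(False))   # exactly 0
```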
In the single-shot scenario, however, my commitment becomes worthless once the coin comes down tails. There will never be any more 10K; there is no motivation any more to give 100. Following my precommitment, unless it is externally enforced, no longer makes any sense.
So the scenarios are significantly different.
There will never be any more 10K; there is no motivation any more to give 100. Following my precommitment, unless it is externally enforced, no longer makes any sense.
This is the point of the thought experiment.
Omega is a predictor. His actions aren’t just based on what you decide, but on what he predicts that you will decide.
If your decision theory says “nah, I’m not paying you” when you aren’t given advance warning or repeated trials, then that is a fact about your decision theory even before Omega flips his coin. He flips his coin, gets heads, examines your decision theory, and gives you no money.
But if your decision theory pays up, then if he flips tails, you pay $100 for no possible benefit.
Neither of these seems entirely satisfactory. Is this a reasonable feature for a decision theory to have? Or is it pathological? If it’s pathological, how do we fix it without creating other pathologies?
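To spell the two branches out (a sketch; the payoffs are from the problem, the bookkeeping is mine):

```python
def expected_value(policy_pays_on_tails):
    """Expected winnings, given that Omega's heads payout depends only on
    what the policy would do on tails."""
    heads_payout = 10_000 if policy_pays_on_tails else 0
    tails_payout = -100 if policy_pays_on_tails else 0
    return 0.5 * heads_payout + 0.5 * tails_payout

print(expected_value(True))   # 4950.0 -- yet on the tails branch this policy loses 100
print(expected_value(False))  # 0.0    -- yet on the heads branch this policy gets nothing
```

The paying policy wins in expectation but is the only one that ever hands over money, which is exactly the tension being asked about.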
if your decision theory pays up, then if he flips tails, you pay $100 for no possible benefit.
But in the single-shot scenario, after it comes down tails, what motivation does an ideal game theorist have to stick to the decision theory?
Like Parfit’s hitchhiker, although in advance you might agree that it’s a worthwhile deal, when it comes to the point of actually paying up, your motivation is gone, unless you have bound yourself in some other way.
But in the single-shot scenario, after it comes down tails, what motivation does an ideal game theorist have to stick to the decision theory?
That’s what the problem is asking!
This is a decision-theoretic problem. Nobody cares about it for immediate practical purposes. “Stick to your decision theory, except when you non-rigorously decide not to” isn’t a resolution to the problem, any more than “ignore the calculations since they’re wrong” was a resolution to the ultraviolet catastrophe.
Again, the point of this experiment is that we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment. The original motivation is almost certainly in the context of AI design, where you don’t HAVE a human homunculus implementing a decision theory; the agent just is its decision theory.
we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment
Well, if we’re designing an AI now, then we have the capability to make a binding precommitment, simply by writing code. And we are still in a position where we can hope for the coin to come down heads. So yes, in that privileged position, we should bind the AI to pay up.
However, to the question as stated, “is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?” I would still answer, “No, you don’t achieve your goals/utility by paying up.” We’re specifically told that the coin has already been flipped. Losing $100 has negative utility, and positive utility isn’t on the table.
Alternatively, since it’s asking specifically about the decision, I would answer: if you haven’t made the decision until after the coin comes down tails, then paying is the wrong decision. Only if you’re deciding in advance (when you still hope for heads) can a decision to pay have the best expected value.
Even if deciding in advance, though, it’s still not a guaranteed win, but rather a gamble. So I don’t see any inconsistency in saying, on the one hand, “You should make a binding precommitment to pay”, and on the other hand, “If the coin has already come down tails without a precommitment, you shouldn’t pay.”
Suppose there were a lottery where the expected value of a ticket was actually positive, and someone offered to sell you their ticket at cost price. Buying it would make sense in advance; but if you didn’t buy it, and the winners were then announced and that ticket didn’t win, buying it no longer makes sense.
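In symbols (my notation, not the commenter’s): with ticket price $c$, jackpot $J$, and win probability $p$,

\[
\text{before the draw: } pJ - c > 0 \;\Rightarrow\; \text{buy}; \qquad
\text{after a losing draw: } 0 - c < 0 \;\Rightarrow\; \text{do not buy}.
\]

The same ex ante / ex post split is what the comment applies to Omega’s coin.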
You’re fundamentally failing to address the problem.
For one, your examples just plain omit the “Omega is a predictor” part, which is key to the situation. Since Omega is a predictor, there is no distinction between making the decision ahead of time or not.
For another, unless you can prove that your proposed alternative doesn’t have pathologies just as bad as the Counterfactual Mugging, you’re at best back to square one.
It’s very easy to say “look, just don’t do the pathological thing”. It’s very hard to formalize that into an actual decision theory without creating new pathologies. It feels obnoxious to keep repeating this, but that is the entire problem in the first place.
there is no distinction between making the decision ahead of time or not
Except that even if you make the decision, what would motivate you to stick to it once it can no longer pay off?
Your only motivation to pay is the hope of obtaining the $10000. If that hope does not exist, what reason would you have to abide by the decision that you make now?
Your decision is a result of your decision theory, and your decision theory is a fact about you, not just something that happens in that moment.
You can say—I’m not making the decision ahead of time; I’m waiting until after I see that Omega has flipped tails. In that case, when Omega predicts your behavior ahead of time, he predicts that you won’t decide until after the coin flip and would then refuse to pay given tails—so, although the flip hasn’t happened yet and could still come up heads, your yet-unmade decision has the same effect as if you had loudly precommitted to refusing.
You’re trying to reason in temporal order, but that doesn’t work in the presence of predictors.
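A toy sketch of why temporal order drops out. Representing the agent as a bare policy function that Omega can consult before the flip is my own illustration, not anything specified in the problem:

```python
def omega(policy, coin):
    """Omega consults the policy *before* the flip; that is the 'prediction'."""
    would_pay = policy("tails") == "pay"
    if coin == "heads":
        return 10_000 if would_pay else 0
    return -100 if would_pay else 0

wait_and_see = lambda observation: "refuse"   # "I'll decide once I actually see tails"
committed    = lambda observation: "pay"

for coin in ("heads", "tails"):
    print(coin, omega(wait_and_see, coin), omega(committed, coin))
# heads 0 10000
# tails 0 -100
# The wait-and-see agent never sees the 10,000: its yet-unmade decision is
# already a fact about its policy, and the policy is all Omega ever looks at.
```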
I get that that could work for a computer, because a computer can be bound by an overall decision theory without attempting to think about whether that decision theory still makes sense in the current situation.
I don’t mind predictors in, e.g., Newcomb’s problem. Effectively, there is a backward causal arrow, because whatever you choose causes the predictor to have already acted differently. Unusual, but reasonable.
However, in this case, yes, your choice affects the predictor’s earlier decision—but since the coin never came down heads, who cares any more how the predictor would have acted? Why care about being the kind of person who will pay the counterfactual mugger, if there will never again be any opportunity for it to pay off?
Yes, that is the problem in question!
If you want the payoff, you have to be the kind of person who will pay the counterfactual mugger, even once you can no longer benefit from doing so. Is that a reasonable feature for a decision theory to have? It’s not clear that it is; it seems strange to pay out, even though the expected value of becoming that kind of person is clearly positive before you see the coin. That’s what the counterfactual mugging is about.
If you’re asking “why care” rhetorically, and you believe the answer is “you shouldn’t be that kind of person”, then your decision theory prefers lower expected values, which is also pathological. How do you resolve that tension? This is, once again, literally the entire problem.
Well, as previously stated, my view is that the scenario as stated (single-shot with no precommitment) is not the most helpful hypothetical for designing a decision theory. An iterated version would actually be more relevant, since we want to design an AI that can make more than one decision. And in the iterated version, the tension is largely resolved, because there is a clear motivation to stick with the decision: we still hope for the next coin to come down heads.
Are you actually trying to understand? At some point you’ll predictably approach death, and predictably assign a vanishing probability to another offer or coin-flip coming after a certain point. Your present self should know this. Omega knows it by assumption.
I’m pretty sure that decision theories are not designed on that basis. We don’t want an AI to start making different decisions based on the probability of an upcoming decommission. We don’t want it to become nihilistic and stop making decisions because it predicted the heat death of the universe and decided that all paths have zero value. If death is actually tied to the decision in some way, then sure, take that into account, but otherwise, I don’t think a decision theory should have “death is inevitably coming for us all” as a factor.
I’m pretty sure that decision theories are not designed on that basis.
You are wrong. In fact, this is a totally standard thing to consider, and “avoid back-chaining defection in games of fixed length” is a known problem, with various known strategies.
So say it’s repeated. Since our observable universe will end someday, there will come a time when the probability of future flips is too low to justify paying if the coin lands tails. Your argument suggests you won’t pay, and by assumption Omega knows you won’t pay. But then on the previous trial you have no incentive to pay, since you can’t fool Omega about your future behavior. This makes it seem like non-payment propagates backward, and you miss out on the whole sequence.
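The unraveling can be written out as a backward induction. The modeling assumptions here are mine (independent rounds, a known final round, Omega predicting each round’s disposition directly), chosen to mirror the argument above rather than to settle it:

```python
def disposition(rounds_remaining):
    """Return (pays_on_tails_this_round, expected_value_of_remaining_rounds)
    for an agent that pays only in order to secure *future* heads payouts."""
    if rounds_remaining == 0:
        return False, 0.0
    _, future_ev = disposition(rounds_remaining - 1)
    # Omega predicts the later rounds directly, so paying now cannot raise future_ev.
    gain_from_paying_now = 0.0
    expected_cost_of_paying = 0.5 * 100       # hand over 100 whenever tails comes up
    pays = gain_from_paying_now > expected_cost_of_paying      # always False
    # Omega foresees the refusal, so this round's heads is worth nothing either.
    this_round_ev = 0.5 * (10_000 if pays else 0) + 0.5 * (-100 if pays else 0)
    return pays, this_round_ev + future_ev

print(disposition(10))   # (False, 0.0): non-payment propagates back through every round
```

Under these assumptions the agent collects nothing over the whole sequence, which is the “miss out on the whole sequence” point.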
I wouldn’t trust myself to accurately predict the odds of another repetition, so I don’t think it would unravel for me. But this comes back to my earlier point that you really need some external motivation, some precommitment, because “I want the 10K” loses its power as soon as the coin comes down tails.