Homeowner’s insurance.
ETA: Actually, insurance of any kind. I’ll explain with a simple dialogue:
“Hey, give me $1000.”
“WTF? Why?”
“What’s with the attitude? If I had found out your house was going to burn down in the next year, I was going to give you $300,000. Well, turns out it’s not. But if you weren’t willing to give me $1000 in every safe year, I wouldn’t plan on giving you $300,000 in the years your house burns down! Now, are you going to give me $1000, or are you going to condemn to poverty all of your copies living in worlds where your house burns down?”
If no one’s pointed out this insight before, and it really is an insight, please pass this analogy along.
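(A quick sanity check on the dialogue’s numbers, as a minimal sketch: with a $1,000 premium against a $300,000 payout, buying breaks even in expected dollars exactly when the annual burn probability is 1/300. The probability used below is purely illustrative, not something the dialogue specifies.)

```python
# Break-even check on the dialogue's numbers: a $1,000 premium against a
# $300,000 payout.
premium = 1_000
payout = 300_000

# In expected dollars, buying breaks even when p * payout == premium.
break_even_p = premium / payout
print(f"break-even burn probability: {break_even_p:.4f}")  # 1/300, about 0.0033

# Illustrative annual burn probability (an assumption, not from the thread):
p = 0.005
ev_buy_vs_skip = p * payout - premium  # expected gain of buying over skipping
print(f"EV(buy) - EV(skip) = {ev_buy_vs_skip:+.0f}")  # +500 at p = 0.005
```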
If insurance companies predict that my house will burn down next year, they do not simulate me to determine whether I would have paid the $1,000 had they counterfactually mugged me. They just ask me to pay $1,000 every year, before my house potentially burns. This is a critical difference.
Or are insurance companies more psychic than I thought?
That’s not necessary for the parallel to work. In my post, the insurer is stating how things look from your side of the deal, in a way that shows the mapping to the counterfactual mugger. (And by the way, if insurers predict your house will burn down they don’t offer you a policy—not one as cheap as $1000. If they sell you one at all, they sell at a price equal to the payout, in which case it’s just shuffling money around.)
What creates a mapping to Newcomb’s problem (and, transitively, to the counterfactual mugging) is the inability to selectively set a policy so that it only applies at just the right time to benefit you. With a perfect predictor (Omega), you can’t “have a policy of one-boxing” yet conceal your intent to actually two-box.
This same dilemma arises in insurance, without having to assume a perfect, near-acausal predictor: you can’t “decide against buying insurance” and then make an exception over the time period where the disaster occurs. All that’s necessary for you to be in that situation is that you can’t predict the disaster significantly better than the insurer (assuming away for now the problems of insurance fraud and liability insurance, which introduce other considerations).
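(To make the Newcomb side of the mapping concrete, here is a minimal expected-value sketch; the $1,000,000/$1,000 payoffs and the accuracy sweep are the standard illustrative choices, not figures from this thread. The crossover is at accuracy 0.5005, so “secretly intending to two-box” stops paying as soon as the predictor is even slightly better than chance.)

```python
# Expected value of one-boxing vs. two-boxing against a predictor with
# accuracy q. Payoffs are the standard illustrative ones: the opaque box
# holds $1,000,000 iff the predictor expected one-boxing; the transparent
# box always holds $1,000.
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(q: float) -> float:
    # With probability q the predictor foresaw one-boxing and filled the box.
    return q * BIG

def ev_two_box(q: float) -> float:
    # With probability q the predictor foresaw two-boxing and left it empty.
    return (1 - q) * BIG + SMALL

for q in (0.5, 0.9, 0.99, 1.0):
    print(f"q={q:4}: one-box ${ev_one_box(q):>9,.0f}   two-box ${ev_two_box(q):>9,.0f}")
```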
I see your point.
The analogy is that both situations use expected utility to make a decision. The principal difference is that when you buy insurance, you do expect your house to burn to the ground, with some small probability. Counterfactual mugging suggests the improbable conclusion that the updated probability distribution is not what you should base your decisions on, and this aspect is clearly absent from normal insurance.
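(For concreteness, a minimal sketch of the expected-utility comparison with a risk-averse agent; the wealth, premium, and burn-probability figures are illustrative assumptions, not the thread’s.)

```python
import math

# Expected-utility comparison for insurance with a risk-averse (log) utility.
# Wealth, premium, and burn probability are illustrative assumptions.
wealth = 400_000
house = 300_000
premium = 1_200            # actuarially unfair: p * house = 1,000 < 1,200
p = 1 / 300                # small prior probability the house burns this year

def eu_buy() -> float:
    # Fully insured: you end at wealth - premium whether or not it burns.
    return math.log(wealth - premium)

def eu_skip() -> float:
    return (1 - p) * math.log(wealth) + p * math.log(wealth - house)

print(f"EU(buy)  = {eu_buy():.6f}")   # higher under the prior
print(f"EU(skip) = {eu_skip():.6f}")
# Under the prior, buying wins even at an unfair premium, because log utility
# punishes the rare large loss heavily. Once you learn the outcome (house
# safe), the updated distribution says skipping would have been better; that
# is the same hindsight the counterfactual mugging asks you not to act on.
```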
No, normal insurance has this aspect too: you don’t regret buying insurance once you learn the updated probability distribution, so you shouldn’t base your decision on it either.
I don’t regret it, because I remember that it was the best I could do given the information I had at the time. But if I knew when deciding whether or not to buy insurance that I would not be sued or become liable for large amounts, then I wouldn’t buy it. And I don’t want to change my decision algorithm to ignore information just because I didn’t have it some other time.
Where did I suggest that throwing away information is somehow optimal, or that its optimality differs from Newcomb’s problem or the counterfactual mugging?
I don’t know whether or not you intended to say that. But if you didn’t, then what did you mean by
normal insurance has this aspect too: you don’t regret buying insurance once you learn the updated probability distribution, so you shouldn’t base your decision on it either
To me that looks like you are saying that it doesn’t matter if you decide whether to buy insurance before or after you learn what will happen next year.
Please read it in the context of the comment I was replying to. Vladimir_Nesov was trying to show how my mapping of insurance to Newcomb didn’t carry over one important aspect, and my reply was that when you consistently carry over the mapping, it does.
That is the context that I read it in. He pointed out that counterfactual mugging is equivalent to insurance only if you fail to update on the information about which way the coin fell before deciding (not) to play. You responded that this made no difference because you didn’t regret buying insurance a year later (when you have the information but don’t get to reverse the purchase).
I guess I should have asked for clarification on what he meant by the “improbable conclusion” that the counterfactual mugging suggests. I thought he meant that the possibility of being counterfactually mugged implies the conclusion that you should pre-commit to paying the mugger, and not change your action based upon finding that you were on the losing side.
If that’s not the case, we’re starting from different premises.
In any case, I think the salient aspect is the same between the two cases: it is optimal to precommit to paying, even if it seems like being able to change course later would make you better off.
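(A minimal expected-value sketch of that claim, using the $10-on-heads / $5-on-tails stakes from the coin-flip template quoted in the next comment:)

```python
# Ex-ante expected value of the two possible policies in the counterfactual
# mugging, using the $10 (heads reward) / $5 (tails payment) stakes from the
# template quoted below.
HEADS_REWARD = 10   # paid only if the predictor expects you to pay on tails
TAILS_COST = 5

# Policy "pay": the predictor foresees payment, so heads rewards you and
# tails costs you.
ev_pay = 0.5 * HEADS_REWARD + 0.5 * (-TAILS_COST)   # +2.50

# Policy "refuse": no reward on heads, nothing paid on tails.
ev_refuse = 0.0

print(f"EV(precommit to pay) = {ev_pay:+.2f}")
print(f"EV(refuse)           = {ev_refuse:+.2f}")
# Before the flip, the paying policy wins, even though after a tails result
# the $5 looks like a pure loss; that is the pull toward changing course.
```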
That’s not really a counterfactual mugging though, is it? i.e., it doesn’t fit the template of “I decided to flip a fair coin and give you ten dollars if it came up heads, if I predicted (and I’m really good at predicting) that if it came up tails you would give me five dollars. I flipped the coin yesterday and it came up tails. So… do you give me five dollars?”
EDIT: to respond to your edit… what insurance company would actually do that? i.e., you first have to sign up with them, and so on. And if their actuaries compute that there’s a reasonable likelihood they’ll have to pay out to you, they avoid you or quote you nastier premiums in the first place. I guess I could see an argument that insurance could be viewed as RELATED to something like an iterated counterfactual mugging, though...
There are a few ways you can look at this to make it seem more relevant. I think you can transform the counterfactual mugging into insurance through a series of steps, none of which should change your answer.
But let me put it this way instead: Imagine that you’re going to insert your consciousness into a random “you” across the multiverse. In some of those your house (or other valuable) burns down (or otherwise descends into entropy). Would you rather be thrown into a “you” who had bought insurance, or hadn’t bought insurance?
Remember, it’s not an option to only buy insurance in the ones where your house burns down, i.e. to separate the “yous” into a) those whose houses didn’t burn down and didn’t buy insurance, vs. b) those whose houses did burn down and did buy insurance. This inseparability, I think, captures the salient aspects of the counterfactual mugging because it’s (presumably) not an option to “be the type to pay the mugger” only in those cases where the coin flip favors you.
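(A minimal simulation of the thought experiment, reusing the dialogue’s $1,000 premium and $300,000 house; the burn probability and population size are illustrative assumptions.)

```python
import random

# Simulate many "yous" across the multiverse: in a few, the house burns down.
# Compare the two inseparable populations: everyone buys, or nobody buys.
random.seed(0)
N = 1_000_000
P_BURN = 1 / 300               # illustrative annual burn probability
PREMIUM, HOUSE = 1_000, 300_000

burned = [random.random() < P_BURN for _ in range(N)]

# Net wealth change per copy. Insured copies are made whole, so each loses
# only the premium; an uninsured copy loses the house if it burns.
buy = [-PREMIUM] * N
skip = [-HOUSE if b else 0 for b in burned]

print(f"mean, insured copies:   {sum(buy) / N:+,.0f}")   # -1,000
print(f"mean, uninsured copies: {sum(skip) / N:+,.0f}")  # about -1,000
print(f"worst insured copy:     {min(buy):+,}")          # -1,000
print(f"worst uninsured copy:   {min(skip):+,}")         # -300,000
# Same average, very different worst case: which population would you rather
# be dropped into at random?
```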
(I daydreamed once about some guy whose house experiences a natural disaster, so he goes to an insurance company with which he has no policy, and when it’s explained to him that they only pay out to people who have a policy with them, he rolls his eyes and tries to give them money equal to a month’s premium, as if that will somehow make them pay out.)