So, according to your definition of “good news” and “bad news”, it might be bad news to find that you’ve made a good decision, and good news to find that you’ve made a bad decision? Why would a rational agent want to have such a concept of good and bad news?
If you wake up in the morning to learn that you got drunk last night, played the lottery, and won, then this is good news.
Let us suppose that, when you were drunk, you were computationally limited in a way that made playing the lottery (seem to be) the best decision given your computational limitations. Now, in your sober state, you are more computationally powerful, and you can see that playing the lottery last night was a bad decision (given your current computational power but minus the knowledge that your numbers would win). Nonetheless, learning that you played and won is good news. After all, maybe you can use your winnings to become even more computationally powerful, so that you don’t make such bad decisions in the future.
Why is that good news, when it also implies that in the vast majority of worlds/branches, you lost the lottery? It only makes sense if, after learning that you won, you no longer care about the other copies of you that lost, but I think that kind of mind design is simply irrational, because it leads to time inconsistency.
Whether these copies exist at all, and what their measure is, could depend on details of the lottery’s implementation. If it’s a classical lottery, all the (reasonable) quantum branches from the point you decided could have the same numbers.
I want to be careful to distinguish Many-Worlds (MW) branches from theoretical possibilities (with respect to my best theory). Events in MW-branches actually happen. Theoretical possibilities, however, may not. (I say this to clarify my position, which I know differs from yours. I am not here justifying these claims.)
My thought experiment was supposed to be about theoretical possibility, not about what happens in some MW-branches but not others.
But I’ll recast the situation in terms of MW-branches, because this is analogous to your scenario in your link. All of the MW-branches very probably exist, and I agree that I ought to care about them without regard to which one “I” am or will be subjectively experiencing.
So, if learning that I played and won the lottery in “my” MW-branch doesn’t significantly change my expectation of the measures of MW-branches in which I play or win, then it is neither good news nor bad news.
However, as wnoise points out, some theoretical possibilities may happen in practically no MW-branches.
This brings us to theoretical possibilities. What are my expected measures of MW-branches in which I play and in which I win? If I learn news N that revises my expected measures in the right way, so that the total utility of all branches is greater, then N is good news. This is the kind of news that I was talking about, news that changes my expectations of which of the various theoretical possibilities are in fact realized.
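To make that criterion concrete, here is one way it could be written down (the notation is mine, not the original commenter’s): let m(b) be my expected measure of MW-branch b and U(b) the utility of that branch. Then news N is good news exactly when conditioning on N raises the measure-weighted total utility:

$$\sum_b m(b \mid N)\,U(b) \;>\; \sum_b m(b)\,U(b)$$

On this reading, merely learning which branch “I” am experiencing is neutral unless it actually shifts the m(b).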
I’m very surprised that this was downvoted. I would appreciate an explanation of the downvote.
This is the point at which believing in many worlds and caring about other branches leads to a very suspicious way of perceiving reality. I know the absurdity heuristic isn’t all that reliable, but still—would it make you really sad or angry or desperate to realise that you had won a billion (in any currency) under the described circumstances? Would you really celebrate on realising that the great filter, which wipes out a species 90% of the time, and which you previously believed we had already passed, is going to happen in the next 50 years?
I am ready to change my opinion about this style of reasoning, but I probably need a more powerful intuition pump.
Caring about other branches doesn’t imply having congruent emotional reactions to beliefs about them. Emotions aren’t preferences.
Emotions are not preferences, but I believe they can’t be completely disentangled. There is something wrong with a person who feels unhappy after learning that the world has changed towards his/her preferred state.
I don’t see how you can effectively apply social standards like “something wrong” to a mind that implements UDT. There are no human minds or non-human minds that I am aware of that perfectly implement UDT. There are no known societies of beings that do. It stands to reason that such a society would seem very other if judged by the social standards of a society composed of standard human minds.
When discussing UDT outcomes you have to work around that part of you that wants to immediately “correct” the outcome by applying non-UDT reasoning.
That “something wrong” was not so much a social standard as an expression of an intuitive feeling of contradiction, which I wasn’t able to specify more explicitly. I could anticipate general objections such as yours; however, it would help if you could be more concrete here. The question is whether one can say he prefers the state of the world where he dies soon with 99% probability, even if he would in fact be disappointed after realising that it was really going to happen. I think we are now at risk of redefining a few words (like preference) to mean something quite different from what they used to mean, which I don’t find good at all.
And by the way, why is this a question of decision theory? There is no decision in the discussed scenario, only a question of whether some news can be considered good or bad.
I don’t know if this is exactly the kind of thing you’re looking for, but you might like this paper arguing that many-worlds doesn’t imply quantum immortality or similar conclusions based on jumping between branches. (I saw someone cite this a few days ago somewhere on Less Wrong, and I’d give them props here, but can’t remember who they were!)
It was Mallah, probably.
You’re probably right—going through Mallah’s comment history, I think it might have been this post of his that turned me on to his paper. Thanks Mallah!
It’s good news because you just gained a big pile of utility last night.
Yes, learning that you’re not very smart when drunk is bad news, but the money more than makes up for it.
Wei_Dai is saying that all the other copies of you that didn’t win lost more than enough utility to make up for it. This is far from a universally accepted utility measure, of course.
So Wei_Dai’s saying the money doesn’t more than make up for it? That’s clever, but I’m not sure it actually works.
Had the money more than made up for it, it would have been rational from a normal expected-utility perspective to play the lottery. My scenario was assuming that, with sufficient computational power, you would know that playing the lottery wasn’t rational.
We’re not disagreeing about the value of the lottery—it was, by stipulation, a losing bet—we are disagreeing about the proper attitude towards the news of having won the lottery.
I don’t think I understand the difference in opinion well enough to discover the origin of it.
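A minimal numeric sketch of the two accountings being contrasted in this exchange, with made-up numbers (none of these figures come from the discussion): conditioning on the winning branch versus weighting every branch by its measure.

```python
# Toy numbers (purely illustrative, not from the discussion):
ticket_cost = 1.0        # utility spent on the ticket
prize = 1_000_000.0      # utility gained if the ticket wins
p_win = 1e-7             # probability / branch measure of winning

# Ex-ante expected utility of playing: negative, so playing is a losing bet.
ev_playing = p_win * (prize - ticket_cost) + (1 - p_win) * (-ticket_cost)
print(ev_playing)        # about -0.9 => "by stipulation, a losing bet"

# Accounting 1: after learning you won, condition on the winning branch only.
u_given_win = prize - ticket_cost
print(u_given_win)       # large and positive => "the money more than makes up for it"

# Accounting 2: weight every branch by its measure, whichever branch "you"
# happen to observe. Learning the outcome doesn't change the branch measures,
# so the total stays equal to the (negative) ex-ante value.
u_all_branches = p_win * (prize - ticket_cost) + (1 - p_win) * (-ticket_cost)
print(u_all_branches)    # same negative number => the news is not good on this view
```

Both sides agree on the first number; the disagreement is over whether the second or the third quantity is the right one to attach to the news of having won.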
I must have misunderstood you, then. I think that we agree about having a positive attitude toward having won.
In the real world, we don’t get to make any decision. The filter hits us or it doesn’t.
If it hits early, then we shouldn’t exist (good news: we do!). If it hits late, then WE’RE ALL GOING TO DIE!
In other words, I agree that it’s about subjective anticipation, but would point out that the end of the world is “bad news” even if you got to live in the first place. It’s just not as bad as never having existed.
Nick is wondering whether we can stop worrying about the filter (if we’re already past it). Any evidence we have that complex life develops before the filter would then cause us to believe in the late filter, leaving it still in our future, and thus still something to worry about and strive against. Not as bad as an early filter, but something far more worrisome, since it is still to come.
Depends on what you mean by “find that you’ve made a good decision”, but probably yes. A decision is either rational given the information you had available or it’s not. Do you mean finding out you made a rational decision that you forgot about? Or making the right decision for the wrong reasons and later finding out the correct reasons? Or finding additional evidence that increases the difference in expected utility for making the choice you made?
Finding out you have a brain tumor is bad news. Visiting the doctor when you have the characteristic headache is a rational decision, and an even better decision in the third sense when you turn out to actually have a brain tumor. Finding a tumor would retroactively make a visit to the doctor a good decision in the second sense even if it originally was for irrational reasons. And in the first sense, if you somehow forgot about the whole thing in the meantime, I guess being diagnosed would remind you of the original decision.
Bad news is news that reduces your expectation of utility. Why should a rational actor lack that concept? If you don’t have a concept for that, you might confuse things that change your expectation of utility with things that change utility, and accidentally end up just maximizing the expectation of utility when you are trying to maximize expected utility.
UPDATE: This comment clearly misses the point. Don’t bother reading it.
Well, the worse you turn out to have done within the space of possible choices/outcomes, the more optimistic you should be about your ability to do better in the future, relative to the current trend.
For example, if I find out that I am being underpaid for my time, while this may offend my sense of justice, it is good news about future salary relative to my prior forecast, because it means it should be easier than I thought to be paid more, all else equal.
Generally, if I find that my past decisions have all been perfect given the information available at the time, I can’t expect to materially improve my future by better decision-making, while if I find errors that were avoidable at the time, then if I fix these errors going forward, I should expect an improvement. This is “good news” insofar as it expands the space of likely outcomes in a utility-positive direction, and so should raise the utility of the expected (average) outcome.