(I have not yet encountered a claim to have finished Reducing anthropics which (a) ends up with only two kinds of stuff and (b) does not seem to imply that I should expect my experiences to dissolve into Boltzmann-brain chaos in the next instant, given that if all this talk of ‘degree of realness’ is nonsense, there is no way to say that physically-lawful copies of me are more common than Boltzmann brain copies of me.)
I think it was Vladimir Nesov who said something like the following: Anticipation is just what it feels like when your brain has decided that it makes sense to pre-compute now what it will do if it has some particular possible future experience. You should expect experiences only if expecting (i.e., thinking about in advance) those experiences has greater expected value than thinking about other things.
On this view, which seems right to me, you shouldn’t expect to dissolve into Boltzmann-brain chaos. This is because you know that any labor that you expend on that expectation will be totally wasted. If you find yourself starting to dissolve, you won’t look back on your present self and think, “If only I’d thought in advance about what to do in this situation. I could have been prepared. I could be doing something right now to improve my lot.”
Consider an analogous situation. You’re strapped to a bed in a metal box, utterly immobilized and living a miserable life. Intravenous tubes are keeping you alive. You know that you are powerless to escape. In fact, you know that you are absolutely powerless to make your life in here any better or worse. You know that, tomorrow, your captors will roll a million-sided die, with sides numbered one to a million. If the die comes up “1”, you will be released, free to make the best of your life in the wide-open world. If any other side comes up, you will remain confined as you are now until you die. There will be no other chances for any change in your circumstances.
Clearly you are more likely to spend the rest of your life in the box. But should you spend any time anticipating that? Of course not. What would be the point? You should spend all of your mental effort on figuring out the best thing to do if you are released. Your expected utility is maximized by thinking only about this scenario, even though it is very improbable. Even a single thought given to the alternative possibility is a wasted thought. You should not anticipate confinement after tomorrow. You should not expect to be confined after tomorrow. These mental activities are maximally bad options for what you could be doing with your time right now.
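To make the expected-value comparison explicit, here is a toy sketch. The one-in-a-million probability comes from the die in the scenario, but the utility numbers are invented purely for illustration; only their structure matters.

```python
# Toy expected-value sketch of the die-in-the-box scenario.
# The 1-in-a-million probability comes from the scenario; the utility
# numbers are made up, and only their structure matters: a prior thought
# pays off in one branch and is worth nothing in the other.

P_RELEASE = 1 / 1_000_000        # the die comes up "1"
P_CONFINED = 1 - P_RELEASE       # any other face

# Value of having spent a unit of thought on a contingency, given that
# the contingency actually occurs:
VALUE_IF_RELEASED_HAVING_PLANNED = 100.0   # you can act on the plan
VALUE_IF_CONFINED_HAVING_PLANNED = 0.0     # no plan can change anything

ev_thought_about_release = P_RELEASE * VALUE_IF_RELEASED_HAVING_PLANNED
ev_thought_about_confinement = P_CONFINED * VALUE_IF_CONFINED_HAVING_PLANNED

print(ev_thought_about_release)      # 0.0001
print(ev_thought_about_confinement)  # 0.0
```

However small the release probability is made, the comparison comes out the same way, because the confinement branch contributes nothing that a prior thought could improve.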
You’ve just redefined “expect” so that the problem goes away. For sure, there’s no practical point in worrying about outcomes that you can’t do anything about, but that doesn’t mean you shouldn’t expect them. If you want to argue that we should use a different notion than “expect”, or that the practical considerations show that the Boltzmann-brain argument isn’t a problem, that’s fine, but this has all the benefits of theft over honest toil.
You’ve just redefined “expect” so that the problem goes away.
I don’t believe that there is any redefinition going on here. I intend to use “expect” in exactly the usual sense, which I take also to be the sense that Eliezer was using when he wrote “I have not yet encountered a claim to have finished Reducing anthropics which … does not seem to imply that I should expect my experiences to dissolve into Boltzmann-brain chaos in the next instant”.
Both he and I are referring to a particular mental activity, namely the activity that is normally called “expecting”. With regard to this very same activity, I am addressing the question of whether one “should expect [one’s] experiences to dissolve into Boltzmann-brain chaos in the next instant”. (Emphasis added.)
The potentially controversial claim in my argument is not the definition of “expect”. That definition is supposed to be utterly standard. The controversial claim is about when one ought to expect. The “standard view” is that one ought to expect an event just when its probability exceeds some threshold. To argue against this view, I am pointing out that expecting an event is a certain mental act. Since it is an act, a proper justification for doing it should take into account utilities as well as probabilities. My claim is that, once one takes the relevant utilities into account, one easily sees that one shouldn’t expect oneself to dissolve into Boltzmann-brain chaos, even if that dissolution is overwhelmingly likely to happen.
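To put the contrast schematically (the notation is only my sketch, not anything standard):

```latex
% Standard view: expect an event exactly when its probability clears a threshold.
\[
\text{expect } E \;\iff\; P(E) > \theta .
\]
% Act view: expecting E is itself a mental act, to be weighed against the other
% things one could be doing with the same attention.
\[
\text{expect } E \;\iff\;
\underbrace{P(E)\,U(\text{prepared}, E) + P(\lnot E)\,U(\text{prepared}, \lnot E) - c}_{\text{expected value of the act of expecting } E}
\;\ge\; \text{expected value of any alternative mental act} .
\]
```

Here θ is a probability threshold and c is the cost of the thought itself. In the Boltzmann-brain case, being prepared buys nothing, so the left-hand side never beats the alternatives, however large P(E) is.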
Ah, okay. You’re quite right then, I misdiagnosed what you were trying to do. I still think it’s wrong, though.
In particular, I don’t think the “should” in that sentence works the way you’re claiming it does. In context, “Should I expect X?” seems equivalent to “Would I be correct in expecting X?” or somesuch, rather than “Ought I (practically/morally) to expect X?”. I guess it kind of looks like perhaps it’s an epistemic-rationality “should”, but I’m not sure it’s even that; English is not so well-behaved as that.
“Should I expect X?” seems equivalent to “Would I be correct in expecting X?” or somesuch...
Then my answer would be, Maybe you would be correct. But why would this imply that anthropics needs any additional “reducing”, or that something more than logic + physics is needed? It all still adds up to normality. You still make all the same decisions about what you should work to protect or prevent, what you should think about and try to bring about, etc. All the same things need to be done with exactly the same urgency. Your allegedly impending dissolution doesn’t change any of this.
Right. So, as I said, you are counselling that “anthropics” is practically not a problem, as even if there is a sense of “expect” in which it would be correct to expect the Boltzmann-brain scenario, this is not worth worrying about because it will not affect our decisions.
That’s a perfectly reasonable thing to say, but it’s not actually addressing the question of getting anthropics right, and it’s misleading to present it as such. You’re just saying that we shouldn’t care about this particular bit of anthropics. That doesn’t settle whether or not I would be correct to expect my impending dissolution.
it’s not actually addressing the question of getting anthropics right, and it’s misleading to present it as such.
I would have been “addressing the question of getting anthropics right” if I had talked about what the “I” in “I will dissolve” means, or about how I should go about assigning a probability to that indexical-laden proposition. I don’t think that I presented myself as doing that.
I’m also not saying that I’ve solved these problems, or that we shouldn’t work towards a general theory of anthropics that answers them.
The uselessness of anticipating that you will be a Boltzmann brain is particular to Boltzmann-brain scenarios. It is not a feature of anthropic problems in general. The Boltzmann brain is, by hypothesis, powerless to do anything to change its circumstances. That is what makes anticipating the scenario pointless. Most anthropic scenarios aren’t like this, and so it is much more reasonable to wonder how you should allocate “anticipation” to them.
The question of whether indexicals like “I” should play a role in how we allocate our anticipation — that question is open as far as I know.
My point was this. Eliezer seemed to be saying something like, “If a theory of anthropics reduces anthropics to physics+logic, then great. But if the theory does that at the cost of saying that I am probably a Boltzmann brain, then I consider that to be too high a price to pay. You’re going to have to work harder than that to convince me that I’m really and truly probably a Boltzmann brain.” I am saying that, even if a theory of anthropics says that “I am probably a Boltzmann brain” (where the theory explains what that “I” means), that is not a problem for the theory. If the theory is otherwise unproblematic, then I see no problem at all.
That… sounds like success to me. Did you want him to redefine it so the problem stuck around?
It sounds like solving a different problem. Like I said, it’s fine to claim that we should use a different notion than the one we actually use, but changing it by fiat and then claiming there’s no problem is not doing that.