You’ve just redefined “expect” so that the problem goes away. For sure, there’s no practical point in worrying about outcomes that you can’t do anything about, but that doesn’t mean you shouldn’t expect them. If you want to argue that we should use a different notion than “expect”, or that the practical considerations show that the Boltzmann-brain argument isn’t a problem, that’s fine, but this has all the benefits of theft over honest toil.
You’ve just redefined “expect” so that the problem goes away.
I don’t believe that there is any redefinition going on here. I intend to use “expect” in exactly the usual sense, which I take also to be the sense that Eliezer was using when he wrote “I have not yet encountered a claim to have finished Reducing anthropics which … does not seem to imply that I should expect my experiences to dissolve into Boltzmann-brain chaos in the next instant”.
Both he and I are referring to a particular mental activity, namely the activity that is normally called “expecting”. With regard to this very same activity, I am addressing the question of whether one “should expect [one’s] experiences to dissolve into Boltzmann-brain chaos in the next instant”. (Emphasis added.)
The potentially controversial claim in my argument is not the definition of “expect”. That definition is supposed to be utterly standard. The controversial claim is about when one ought to expect. The “standard view” is that one ought to expect an event just when that event has a probability of happening that is greater than some threshold. To argue against this view, I am pointing to the fact that expecting an event is a certain mental act. Since it is an act, a proper justification for doing it should take into account utilities as well as probabilities. My claim is that, once one takes the relevant utilities into account, one easily sees that one shouldn’t expect oneself to dissolve into Boltzmann-brain chaos, even if that dissolution is overwhelmingly likely to happen.
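To make the shape of that comparison explicit, here is a minimal sketch with made-up symbols (they are doing nothing beyond illustration): let $p$ be the probability that I am a Boltzmann brain, let $u_{BB}$ be the value realized if I am one (the same whichever way I expect, since a Boltzmann brain dissolves before its expectation can make any difference), let $u_N$ be the value of carrying on normally, and let $c > 0$ be the cost that expecting dissolution would impose on the normal case (dread, distorted plans).

\[
\begin{aligned}
U(\text{expect dissolution}) &= p\,u_{BB} + (1-p)\,(u_N - c),\\
U(\text{don't expect}) &= p\,u_{BB} + (1-p)\,u_N,\\
U(\text{don't expect}) - U(\text{expect dissolution}) &= (1-p)\,c > 0 \quad \text{for any } p < 1.
\end{aligned}
\]

Not expecting comes out ahead for every $p$ short of certainty, however enormous, which is all this sketch is meant to show.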
Ah, okay. You’re quite right then, I misdiagnosed what you were trying to do. I still think it’s wrong, though.
In particular, I don’t think the “should” in that sentence works the way you’re claiming that it does. In context, “Should I expect X?” seems equivalent to “Would I be correct in expecting X?” or somesuch, rather than “Ought I (practically/morally) to expect X?”. English is not so well-behaved as that. I guess it kind of looks like perhaps it’s an epistemic-rationality “should”, but I’m not sure it’s even that.
“Should I expect X?” seems equivalent to “Would I be correct in expecting X?” or somesuch...
Then my answer would be: maybe you would be correct. But why would this imply that anthropics needs any additional “reducing”, or that something more than logic + physics is needed? It all still adds up to normality. You still make all the same decisions about what you should work to protect or prevent, what you should think about and try to bring about, etc. All the same things need to be done with exactly the same urgency. Your allegedly impending dissolution doesn’t change any of this.
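As a toy illustration of what I mean by “adds up to normality” (every name and number below is invented purely for the example):

```python
# Toy model (all names and numbers invented): the Boltzmann-brain branch pays
# every action the same, because by hypothesis the brain dissolves before any
# action can matter. That branch therefore adds a constant to every expected
# utility and never changes which action comes out on top.

def best_action(p_bb, utility_if_normal, utility_if_bb=0.0):
    """Pick the action with the highest expected utility.

    p_bb              -- probability that I am a Boltzmann brain
    utility_if_normal -- dict mapping each action to its payoff in a normal world
    utility_if_bb     -- payoff if I am a Boltzmann brain (action-independent)
    """
    expected = {
        action: p_bb * utility_if_bb + (1 - p_bb) * payoff
        for action, payoff in utility_if_normal.items()
    }
    return max(expected, key=expected.get)

actions = {"keep working on what matters": 10.0, "dig a bunker": 2.0, "give up": 0.0}

print(best_action(0.0, actions))       # keep working on what matters
print(best_action(0.999999, actions))  # keep working on what matters -- same decision
```

The Boltzmann term contributes the same constant to every action, so it drops out of every comparison; the ranking of actions, and hence the decision, is untouched by how large p_bb is.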
Right. So, as I said, you are counselling that “anthropics” is practically not a problem, as even if there is a sense of “expect” in which it would be correct to expect the Boltzmann-brain scenario, this is not worth worrying about because it will not affect our decisions.
That’s a perfectly reasonable thing to say, but it’s not actually addressing the question of getting anthropics right, and it’s misleading to present it as such. You’re just saying that we shouldn’t care about this particular bit of anthropics. That doesn’t tell me whether I would (or wouldn’t) be correct to expect my impending dissolution.
it’s not actually addressing the question of getting anthropics right, and it’s misleading to present it as such.
I would have been “addressing the question of getting anthropics right” if I had talked about what the “I” in “I will dissolve” means, or about how I should go about assigning a probability to that indexical-laden proposition. I don’t think that I presented myself as doing that.
I’m also not saying that I’ve solved these problems, or that we shouldn’t work towards a general theory of anthropics that answers them.
The uselessness of anticipating that you will be a Boltzmann brain is particular to Boltzmann-brain scenarios. It is not a feature of anthropic problems in general. The Boltzmann brain is, by hypothesis, powerless to do anything to change its circumstances. That is what makes anticipating the scenario pointless. Most anthropic scenarios aren’t like this, and so it is much more reasonable to wonder how you should allocate “anticipation” to them.
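Continuing the toy sketch from above (still entirely invented), the contrast is easy to see: the moment the improbable branch’s payoff depends on what you do, its probability starts mattering to the decision again.

```python
# Same toy setup, but now the improbable branch is NOT action-independent,
# as in most anthropic scenarios: its probability changes which action is best.

def best_action_general(p_branch, utility_if_normal, utility_if_branch):
    """utility_if_branch is now also a dict: action -> payoff in the odd branch."""
    expected = {
        action: p_branch * utility_if_branch[action]
                + (1 - p_branch) * utility_if_normal[action]
        for action in utility_if_normal
    }
    return max(expected, key=expected.get)

normal = {"plan for tomorrow": 10.0, "hedge against the odd branch": 3.0}
odd    = {"plan for tomorrow": 0.0,  "hedge against the odd branch": 8.0}

print(best_action_general(0.1, normal, odd))  # plan for tomorrow
print(best_action_general(0.9, normal, odd))  # hedge against the odd branch
```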
As far as I know, the question of whether indexicals like “I” should play a role in how we allocate our anticipation is still open.
My point was this. Eliezer seemed to be saying something like, “If a theory of anthropics reduces anthropics to physics+logic, then great. But if the theory does that at the cost of saying that I am probably a Boltzmann brain, then I consider that to be too high a price to pay. You’re going to have to work harder than that to convince me that I’m really and truly probably a Boltzmann brain.” I am saying that, even if a theory of anthropics says that “I am probably a Boltzmann brain” (where the theory explains what that “I” means), that is not a problem for the theory. If the theory is otherwise unproblematic, then I see no problem at all.
That… sounds like success to me. Did you want him to redefine it so the problem stuck around?
It sounds like solving a different problem. Like I said, it’s fine to claim that we should use a different notion than the one that we do, but changing it by fiat and then claiming there’s no problem is not doing that.