“Should I expect X?” seems equivalent to “Would I be correct in expecting X?” or some such...
Then my answer would be, Maybe you would be correct. But why would this imply that anthropics needs any additional “reducing”, or that something more than logic + physics is needed? It all still adds up to normality. You still make all the same decisions about what you should work to protect or prevent, what you should think about and try to bring about, etc. All the same things need to be done with exactly the same urgency. Your allegedly impending dissolution doesn’t change any of this.
Right. So, as I said, you are counselling that anthropics is not a practical problem: even if there is a sense of “expect” in which it would be correct to expect the Boltzmann-brain scenario, it is not worth worrying about, because it will not affect our decisions.
That’s a perfectly reasonable thing to say, but it’s not actually addressing the question of getting anthropics right, and it’s misleading to present it as such. You’re just saying that we shouldn’t care about this particular bit of anthropics. That doesn’t settle whether I would be correct to expect my impending dissolution.
it’s not actually addressing the question of getting anthropics right, and it’s misleading to present it as such.
I would have been “addressing the question of getting anthropics right” if I had talked about what the “I” in “I will dissolve” means, or about how I should go about assigning a probability to that indexical-laden proposition. I don’t think that I presented myself as doing that.
I’m also not saying that I’ve solved these problems, or that we shouldn’t work towards a general theory of anthropics that answers them.
The uselessness of anticipating that you will be a Boltzmann brain is particular to Boltzmann-brain scenarios. It is not a feature of anthropic problems in general. The Boltzmann brain is, by hypothesis, powerless to do anything to change its circumstances. That is what makes anticipating the scenario pointless. Most anthropic scenarios aren’t like this, and so it is much more reasonable to wonder how you should allocate “anticipation” to them.
The question of whether indexicals like “I” should play a role in how we allocate our anticipation — that question is open as far as I know.
My point was this. Eliezer seemed to be saying something like, “If a theory of anthropics reduces anthropics to physics+logic, then great. But if the theory does that at the cost of saying that I am probably a Boltzmann brain, then I consider that to be too high a price to pay. You’re going to have to work harder than that to convince me that I’m really and truly probably a Boltzmann brain.” I am saying that, even if a theory of anthropics says that “I am probably a Boltzmann brain” (where the theory explains what that “I” means), that is not a problem for the theory. If the theory is otherwise unproblematic, then I see no problem at all.