Blerghhhh okay I’ll just write down the thoughts as they come to me, then use the mess later at some point. Maybe that’ll interest you.
Pretty sure the conclusion was like "anthropic explanations make sense, but anthropic updates don't." E.g., anthropic pseudo-explanations of why my lane is so much busier than the one next to me make sense. That's because they only summarize knowledge I already have for other reasons: I already know the lane I'm in has more cars in it; that was presupposed in asking why my lane has so many cars.
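(Quick toy sketch of the lane point, with made-up numbers: sample a random driver from two lanes and check how often they find themselves in the busier one. The anthropic "explanation" just restates the sampling setup; once I know which lane I'm in, there's nothing left to update on.)

```python
import random

# Toy setup, numbers invented for illustration: 90 cars in the busy
# lane, 10 in the quiet one.
cars = {"busy": 90, "quiet": 10}
drivers = [lane for lane, n in cars.items() for _ in range(n)]

# A randomly sampled driver usually finds themselves in the busier lane.
trials = 100_000
in_busy = sum(random.choice(drivers) == "busy" for _ in range(trials))
print(in_busy / trials)  # ~0.9

# The anthropic "explanation" of why my lane is busy only repeats what
# asking the question presupposed: that I'm one of the 90. No residual
# improbability, so no update.
```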
Okay, this is a different line of reasoning, but I'll just go on about it till I remember the other one. They share themes.
Okay, so, in lots of anthropics problems I'm given some hypothetical person in some hypothetical scenario and told to pretend I'm them, and then I'm asked how I should update on finding myself in that scenario.
But I’m not actually them—I’m actually me.
I can explain how I ended up as me using decision-theoretic reasoning (and meta-level concerns, naturally). The reasoning goes: I expect to find myself in important scenarios. But that decision-theoretic reasoning simply wouldn't explain the vast majority of people finding themselves as who they are, since most of them are not in important scenarios.
I simply can't explain how I would counterfactually find myself as someone not in an important scenario. It's like the blue tentacle thing. Luckily, I don't have to. There's no improbability left to explain.
If I counterfactually did find myself as someone seemingly in an unimportant scenario, I would be very confused. I would be compelled to update in favor of hidden interestingness?
Luckily such scenarios will always be counterfactual. It’s a law of metaphysics. No one should ever have to anthropically update. I’m “lucky” in some not-improbable sense.
I shouldn't know how to update in counterfactual scenarios that are impossibly unlikely because they're unimportant, for the same reason I shouldn't be able to explain a blue tentacle.
You can come up with thought experiments where the choice is stipulated to be extremely important. Still counterfactual, still not actually important.
This theory of anthropics is totally useless for people who aren’t me. As it should be. Anthropics shouldn’t provide updating procedures for non-existent people. Or something.
...This wasn't the line of reasoning I wanted to elucidate, and I still don't remember how that one went. This line of reasoning does have the double-counting theme, but it also brings in a controversial conclusion from decision theory, and it's also solipsistic, which makes it useless for group epistemology. Maybe there was some generalization or something… Bleh, I probably wrote something down somewhere.