What is your probability that you’re the heavier brain?
Undefined. It matters a lot what rent the belief is paying. The specifics of how you’ll resolve your probability (or at least what differential evidence would let you update) will help you pick the reference class(es) which matter, and inform your choice of prior (in this case, the amalgam of experience and models, and unidentified previous evidence you’ve accumulated).
Wow, someone really didn't like this. Any reason for the strong downvotes?
I don’t get a strong impression that you read the post. It was pretty clear about what rents the beliefs are paying.
Generally it sucks to see someone take a "there is no answer, the question is ill-specified" transcended-analytic-philosopher posture towards a decision problem (or a pair of specific decision problems that fall under it) that actually is extremely well-specified, and that it seems like a genuinely good analytic philosopher should be able to answer. Over the course of our interactions I get the impression that you're mainly about generating excuses to ignore any problem that surprises you too much; I've never seen you acknowledge or contribute to solving a conceptual problem. I love a good excuse to ignore a wrong question, but they haven't been good excuses.
I would say it's extremely unclear to me that the question "what is your probability that you are agent X" in an anthropic scenario like this is meaningful and has a well-defined answer. You said "there are practical reasons you'd like to know", but you haven't actually concretely specified what will be done with the information.
In the process of looking for something I had previously read about this, I found the following post:
https://www.lesswrong.com/posts/y7jZ9BLEeuNTzgAE5/the-anthropic-trilemma
Which seems to be asking a very similar question to the one you’re considering. (It mentions Ebborians, but postdates that post significantly.)
I then found the thing I was actually looking for: https://www.lesswrong.com/tag/sleeping-beauty-paradox
Which demonstrates why "what rent the belief is paying" is critical:
If Beauty's bets about the coin get paid out once per experiment, she will do best by acting as if the probability is one half. If the bets get paid out once per awakening, acting as if the probability is one third has the best expected value.
Which says, to me, that the probability is not uniquely defined—in the sense that a probability is really a claim about what sort of bets you would take, but in this case the way the bet is structured around different individuals/worlds is what controls the apparent “probability” you should choose to bet with.
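To make that concrete, here's a quick sketch of the arithmetic behind the quoted claim. The framing is my own (Beauty holds a ticket that pays 1 if the coin landed heads, and "her probability" is the price at which that ticket breaks even), not anything from the tag page itself:

```python
import random

def implied_probability_of_heads(n_experiments=200_000):
    """Sleeping Beauty setup: a fair coin is flipped; heads -> one awakening,
    tails -> two awakenings. Beauty holds a ticket that pays 1 if the coin
    landed heads. The fair ticket price (expected payout per ticket bought)
    is the probability she should act on, under each settlement scheme.
    """
    payout_per_exp = tickets_per_exp = 0      # one ticket settled per experiment
    payout_per_awake = tickets_per_awake = 0  # one ticket settled per awakening
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        tickets_per_exp += 1
        tickets_per_awake += awakenings
        if heads:
            payout_per_exp += 1
            payout_per_awake += awakenings  # one winning ticket per awakening
    return payout_per_exp / tickets_per_exp, payout_per_awake / tickets_per_awake

per_experiment, per_awakening = implied_probability_of_heads()
print(f"settled per experiment: implied P(heads) ~ {per_experiment:.3f}")  # ~ 0.500
print(f"settled per awakening:  implied P(heads) ~ {per_awakening:.3f}")   # ~ 0.333
```

Same coin, same information, but the settlement scheme alone moves the break-even "probability" from one half to one third, which is exactly the sense in which the number isn't pinned down until the bet is.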
Ahh. I'm familiar with that case. Did having that in mind make you feel like there's too much ambiguity in the question to really want to dig into it? I wasn't considering that sort of scenario ("they need to know their position" rules it out), but I can see why it would have come to mind.
You might find this removed part relevant: https://www.lesswrong.com/posts/gx6GEnpLkTXn3NFSS/we-need-a-theory-of-anthropic-measure-binding?commentId=mwuquFJHNCiYZFwzg
It acknowledges that some variants of the question can have that quality of... not really needing to know their position.
I’m going to have to think about editing that stuff back in.
I think I did, and I just read it again, and still don’t see it. What anticipated experiences are contingent on this? What is the (potential) future evidence which will let you update your probability, and/or resolve whatever bets you’re making?
Well, ask the question: should the bigger brain receive a million dollars, or do you not care?
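Concretely (this payoff framing is my own toy version, purely selfish and one-shot, not something the post spells out): the credence cashes out as a decision about where the money should go.

```python
def prefer_money_to_heavier_brain(p_heavier: float, prize: float = 1e6) -> bool:
    """Toy decision: a benefactor will hand the prize to exactly one of the
    two brains, and you only value money that ends up with whoever *you*
    turn out to be. p_heavier is your credence that you are the heavier brain.
    """
    ev_give_to_heavier = p_heavier * prize        # you get it iff you're the heavier one
    ev_give_to_lighter = (1 - p_heavier) * prize  # you get it iff you're the lighter one
    return ev_give_to_heavier > ev_give_to_lighter  # turns entirely on p_heavier vs 0.5

print(prefer_money_to_heavier_brain(0.7))  # True
print(prefer_money_to_heavier_brain(0.3))  # False
```

Whether you care, and which way, is the rent the credence is paying.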
I do not have a lot of evidence or detailed thinking to support this viewpoint, but I think I agree with you. I have the general sense that anthropic probabilities like this do not necessarily have well-defined values.
I'm definitely open to that possibility, but it seems like we don't really have a way of reliably distinguishing these sorts of anthropic probabilities from 'conventional' probabilities? I'd guess it's tangled up with the reference class selection problem.