I had that idea at first, but of the people asking the question, only some of them actually know how to do anthropics. Others might be able to ask the anthropic question but have no idea how to solve it, so they throw up their hands and ignore the entire issue, in which case it is effectively the same as if they had never asked it in the first place. Others may make an error in their anthropic reasoning which you know how to avoid; similarly, they aren't in your reference class because their reasoning process is disconnected from yours. Whenever you make a decision, you are implicitly making a bet. Anthropic considerations alter how the bet plays out, and insofar as you can account for this, you can account for anthropics.
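To make the betting point concrete, here is a toy sketch of how an anthropic update can change whether a bet is worth taking (all numbers are invented purely for illustration):

```python
# A toy illustration: the same bet evaluated before and after an
# (hypothetical) anthropic update. All numbers are made up.

def expected_value(p_win, payout_win, payout_lose):
    """Expected value of a simple two-outcome bet."""
    return p_win * payout_win + (1 - p_win) * payout_lose

naive_p = 0.5       # probability assigned before any anthropic consideration
anthropic_p = 0.2   # probability after a hypothetical anthropic update

print(expected_value(naive_p, 10, -10))      # 0.0  -> the bet looks fair
print(expected_value(anthropic_p, 10, -10))  # -6.0 -> the update flips the decision
```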
For any person who actually understands anthropics, there are 10 people who ask the question without understanding it (and 0.1 people who know anthropics better), but that doesn't change my relative location in the middle. It doesn't matter whether there are 20 people behind me and 20 ahead, or 200 behind and 200 ahead, as long as all of them live in the same time interval, say between 1983 and 2050.
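As a rough illustration of why only the relative location matters, a Gott-style doomsday bound depends only on my rank relative to the size of the class observed so far; the 95% confidence level and the ranks below are just example numbers:

```python
# A rough sketch of a Gott-style doomsday bound: scaling the whole
# reference class by a constant leaves the relative conclusion unchanged.

def doomsday_upper_bound(rank, confidence=0.95):
    """If I am uniformly sampled from all N members of the reference class,
    then with probability `confidence` the total N is at most rank / (1 - confidence)."""
    return rank / (1 - confidence)

# 20 people behind me and 20 ahead: rank 21 out of 41 observed so far
print(doomsday_upper_bound(21) / 41)    # ~10.2x the class observed so far
# 200 behind and 200 ahead: rank 201 out of 401
print(doomsday_upper_bound(201) / 401)  # ~10.0x -- scaling the class changes almost nothing
```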
However, before making any anthropic bet, I need to take into account logical uncertainty, that is, the probability that anthropics is not bullshit. I estimate this meta-level uncertainty at 0.5 (I wrote more about this in the meta-doomsday argument text).
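To make this concrete, here is a minimal sketch of how that 0.5 weight would feed into an estimate; the two component probabilities below are invented just to show the arithmetic:

```python
# A hedged sketch of the meta-level discount: mix the anthropic estimate
# with a non-anthropic baseline, weighted by the probability that
# anthropic reasoning works at all. Component probabilities are made up.

P_ANTHROPICS_VALID = 0.5  # estimated probability that anthropics is not bullshit

def meta_adjusted(p_with_anthropics, p_without_anthropics, p_valid=P_ANTHROPICS_VALID):
    """Weighted mixture of the anthropic estimate and the non-anthropic baseline."""
    return p_valid * p_with_anthropics + (1 - p_valid) * p_without_anthropics

# e.g. a doomsday-style estimate of 0.6 against an ordinary outside estimate of 0.2
print(meta_adjusted(0.6, 0.2))  # -> 0.4
```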
Them knowing anthropics better than you only makes a difference insofar as they use a different algorithm or make decisions in a way that is disconnected from yours. For example, if we are discussing anthropics problem X, which you can both solve, and they can also solve Y and Z, which you can't, that is irrelevant here, as we are only asking about X. In any case, I don't think you can assume that people will be evenly distributed. We might hypothesise, for example, that the level of anthropics knowledge will go up over time.
“However, before making any anthropic bet, I need to take into account logical uncertainty”: that seems like a reasonable thing to do. However, at this particular time, I’m only trying to solve anthropics from the inside view, not from the outside view. The latter is valuable, but I prefer to focus on one part of a problem at a time.