For every person who actually understands anthropics, there are perhaps 10 people who ask questions without understanding it (and 0.1 people who know anthropics better than I do), but that does not change my relative location in the middle. It makes no difference whether there are 20 people behind me and 20 ahead, or 200 behind and 200 ahead, as long as all of them live in the same time interval, say between 1983 and 2050.
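To spell out the arithmetic behind that invariance (a worked version of the illustrative counts above, not a claim about the actual numbers): if I am at rank r out of N observers, the Doomsday-style inference depends only on the fraction f = r/N. With 20 behind and 20 ahead, f = 21/41 ≈ 0.51; with 200 behind and 200 ahead, f = 201/401 ≈ 0.50. Scaling both counts by the same factor leaves f, and hence the inference, essentially unchanged.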
However, before making any anthropic bet, I need to take into account logical uncertainty, that is, the probability that anthropics is not bullshit. I estimate this meta-level uncertainty as 0.5 (I wrote more about this in the meta-doomsday argument text).
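As a minimal sketch of how that discount might enter a bet (assuming the two cases are simply weighted by the 0.5 meta-level credence): P(doom) ≈ 0.5 · P(doom | anthropics is valid) + 0.5 · P(doom | anthropics is not valid), so any Doomsday-argument shift in the first term is halved before it reaches the final estimate.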
Their knowing anthropics better than you only makes a difference insofar as they use a different algorithm, i.e. make decisions in a way that is disconnected from yours. For example, if we are discussing anthropic problem X, which you can both solve, and they can also solve Y and Z, which you can't, that is irrelevant here, since we are only asking about X. In any case, I don't think you can assume that people will be evenly distributed. We might hypothesise, for example, that the level of anthropics knowledge will go up over time.
“However, before making any anthropic bet, I need to take into account logical uncertainty” – that seems like a reasonable thing to do. However, at this particular time I am only trying to solve anthropics from the inside view, not from the outside view. The latter is valuable, but I prefer to focus on one part of the problem at a time.