This is the bit I don’t understand—if these agents are identical to me, then it follows that I’m probably a Boltzmann brain too...
In UDT you shouldn’t consider yourself to be just one of your clones. There is no probability measure on the set of your clones: you are all of them simultaneously. CDT is difficult to apply to situations with clones, unless you supplement it with some anthropic hypothesis like SIA or SSA. If you use an anthropic hypothesis, Boltzmann brains will still get you in trouble. In fact, some cosmologists are trying to find models without Boltzmann brains precisely to avoid the conclusion that you are likely to be a Boltzmann brain (although UDT shows the effort is misguided). The problem with UDT and Gödel incompleteness is a separate issue which has no relation to Boltzmann brains.
I meant it in the sense of measure theory. I’ve seen people discussing maximising the measure of a utility function over all future Everett branches...
I’m not sure what you mean here. Sets have measure, not functions.
I imagine a better approach would be to add the satisficing function to the time-discounting function, scaled in some suitable manner. This doesn’t intuitively strike me as a real utility function, as it’s adding apples and oranges so to speak, but perhaps it is useful as a tool?
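To make the suggestion concrete, here is a minimal sketch (γ, λ, u and S are just my illustrative notation, nothing standard):

$$U_{\text{total}} = \sum_t \gamma^t\, u(x_t) \;+\; \lambda\, S(x),$$

where γ ∈ (0, 1) is the discount factor, u is the per-step utility, S(x) ∈ {0, 1} is the satisficing indicator, and λ sets the exchange rate between the two terms. The arbitrariness of λ is exactly the “apples and oranges” worry.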
Well, you still get all of the above-mentioned problems except divergence.
...actually I was talking about alpha-point computation, which I think may involve the creation of daughter universes inside black holes.
Hmm, baby universes are a possibility to consider. I thought the case for them was rather weak, but a quick search revealed this. Regarding performing an infinite number of computations, I’m pretty sure it doesn’t work.
CDT is difficult to apply to situations with clones, unless you supplement it with some anthropic hypothesis like SIA or SSA.
While I can see why there is intuitive cause to abandon the “I am person #2, therefore there are probably not 100 people” reasoning, abandoning “There are 100 clones, therefore I’m probably not clone #1” seems to be simply abandoning probability theory altogether, and I’m certainly not willing to bite that bullet.
Actually, looking back through the conversation, I’m also confused as to how time discounting helps in the case that one is acting like a Boltzmann brain—someone who knows they are a B-brain would discount quickly anyway due to short lifespan; wouldn’t extra time discounting make the situation worse? Specifically, if there are X B-brains for each ‘real’ brain, then if the real brain can survive more than X times as long as a B-brain, and doesn’t time discount, then the ‘real’ brain’s utility still dominates.
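A rough version of that arithmetic (X, τ_B, τ_R and u are my illustrative symbols): with X B-brains of lifespan τ_B for every real brain of lifespan τ_R, and no time discounting, the two contributions compare as

$$X \cdot \tau_B \cdot u \quad \text{vs.} \quad \tau_R \cdot u,$$

so the real brain’s term dominates whenever τ_R > X · τ_B.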
I’m not sure what you mean here. Sets have measure, not functions.
I wasn’t being very precise with my wording—I meant that one would maximise the measure of whatever it is one values.
Hmm, baby universes are a possibility to consider. I thought the case for them was rather weak, but a quick search revealed this. Regarding performing an infinite number of computations, I’m pretty sure it doesn’t work.
Well, I have only a layman’s understanding of string theory, but if it were possible to ‘escape’ into a baby universe by creating a clone inside the universe, then the process can be repeated, leading to an uncountably infinite (!) tree of universes.
While I can see why there is intuitive cause to abandon the “I am person #2, therefore there are probably not 100 people” reasoning, abandoning “There are 100 clones, therefore I’m probably not clone #1” seems to be simply abandoning probability theory altogether, and I’m certainly not willing to bite that bullet.
I’m not entirely sure what you’re saying here. UDT suggests that subjective probabilities are meaningless (thus taking the third horn of the anthropic trilemma, although it can be argued that selfish utility functions are still possible). “What is the probability I am clone #n?” is not a meaningful question. “What is the (updated / a posteriori) probability I am in a universe with property P?” is not a meaningful question in general, but it has approximate meaning in contexts where anthropic considerations are irrelevant. “What is the a priori probability that the universe has property P?” is a question that might be meaningful, but it is probably also approximate, since there is freedom to redefine the prior and the utility function simultaneously (see this). The single fully meaningful type of question is “what is the expected utility I should assign to action A?”, which is OK, since it is the only question you have to answer in practice.
Actually, looking back through the conversation, I’m also confused as to how time discounting helps in the case that one is acting like a Boltzmann brain—someone who knows they are a B-brain would discount quickly anyway due to short lifespan; wouldn’t extra time discounting make the situation worse?
Boltzmann brains exist very far in the future wrt “normal” brains, therefore their contribution to utility is very small. The discount depends on absolute time.
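As a hedged illustration (taking an exponential discount in absolute time, which is just one possible form): a brain living around absolute time t gets weight e^{-t/τ}, so

$$\frac{w_{\text{BB}}}{w_{\text{now}}} = e^{-(t_{\text{BB}} - t_{\text{now}})/\tau} \ll 1 \quad \text{when } t_{\text{BB}} - t_{\text{now}} \gg \tau,$$

and since Boltzmann brains appear only at times t_{BB} enormously later than now, their weight is exponentially suppressed regardless of how quickly a B-brain itself would discount.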
I wasn’t being very precise with my wording—I meant that one would maximise the measure of whatever it is one values.
If “measure” here equals “probability wrt prior” (e.g. Solomonoff prior) then this is just another way to define a satisficing agent (utility equals either 0 or 1).
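Spelled out (V is my illustrative notation for the set of outcomes one values): if utility is the 0/1 indicator of V, then the expected utility under the prior P is

$$\mathbb{E}[U] = \sum_x P(x)\, \mathbf{1}[x \in V] = P(V),$$

so “maximise the measure of whatever one values” and “maximise expected 0/1 utility” are the same optimisation.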
Well, I have only a layman’s understanding of string theory, but if it were possible to ‘escape’ into a baby universe by creating a clone inside the universe, then the process can be repeated, leading to an uncountably infinite (!) tree of universes.
Good point. Surely we need to understand these baby universes better.