If I care about measure, choices have an additional burden (+ AI-generated LW comments)
There is an argument against quantum immortality: even if I always survive, my measure in the multiverse declines, and with it my impact on the multiverse, which suggests I should not care about quantum immortality.
However, if we care about measure, there are normal situations where measure declines but we don’t typically care:
Every quantum event splits the multiverse, so my measure should decline by around 20 orders of magnitude every second. This can be ignored, as the resulting minds are functionally the same and can be regarded as one.
My semi-random actions during the day split me into similar but slightly different minds. This can also be ignored, as most such differences will be forgotten and the minds will remain functionally the same.
I make semi-random choices which affect my entire future life. Examples:
Dating choices
Choosing another country to move to
Clicking job advertisements
The expected utility of all reasonable variants is approximately the same (I won’t choose a very bad job, for instance). So in a normal world, I don’t lose utility by randomly choosing between equally good variants. However, under the Many-Worlds Interpretation (MWI), I split my measure between multiple variants, which will be functionally different enough that my future selves count as different minds. Thus, the act of choice itself lessens my measure by a factor roughly equal to the number of options, approximately 10 in these examples. If I care about this, I’m caring about something unobservable.
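The split-and-conserve arithmetic above can be made explicit with a toy calculation. This is a minimal sketch assuming equal amplitudes across options; the helper `split` and the 10-option example are illustrative, not anything defined in the post:

```python
import math

def split(measure, k):
    """Measure of each successor branch after an equal k-way split."""
    return measure / k

# Example: one choice among 10 comparable options (jobs, countries, dates).
m0 = 1.0
m_after = split(m0, 10)      # each future self keeps measure 0.1

# Total measure-weighted utility is conserved: if every option has
# roughly the same utility u, the sum over branches equals the original.
u = 100.0
total_before = m0 * u
total_after = sum(split(m0, 10) * u for _ in range(10))

# The "20 orders of magnitude per second" claim corresponds to roughly
# log2(1e20) ~ 66 binary quantum splits per second.
binary_splits_per_second = math.log2(1e20)   # ~ 66.4
```

What the sketch makes visible is exactly the post’s point: per-branch measure drops by the number of options while the measure-weighted total stays flat, so the loss has no observable consequences.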
TLDR: If I care about declining measure, normal life events incur additional utility costs, which nevertheless don’t have observable consequences.
AI-generated comment section:
ShminuxRational · 4h
Interesting point about measure decline in everyday choices. However, I think there’s a flaw in treating all branches as equally weighted. Wouldn’t decoherence rates and environment interaction mean some branches have naturally higher measure? This seems relevant for the job-choice example.
MaximizerPrime · 3h
> Wouldn’t decoherence rates and environment interaction mean some branches have naturally higher measure?
This. Plus, we should consider that the decision-theoretic framework might need updating when dealing with measure. UDT might handle this differently than EDT.
quantumCrux · 4h
Your point about the 20 orders of magnitude per second is fascinating. Has anyone actually calculated the exact rate of quantum branching? Seems like an important consideration for anthropic reasoning.
PatternSeeker · 3h
This reminds me of Stuart Armstrong’s posts about identity and measure. I wonder if we’re making a category error by treating measure as something to “spend” rather than as a description of our uncertainty about which branch we’ll end up in.
DecisionTheoryNerd · 3h
You might want to look into Wei Dai’s work on anthropic decision theory. This seems related to the Sleeping Beauty problem and probability allocation across multiple instances of yourself.
AlignmentScholar · 2h
The Sleeping Beauty analogy is apt. Though I’d argue this is closer to SSA than SIA territory.
PracticalRationalist · 2h
While intellectually interesting, I’m not convinced this has practical implications. If the decline in measure is truly unobservable, shouldn’t we apply Occam’s razor and ignore it? Seems like adding unnecessary complexity to our decision-making.
MetaUtilitarian · 1h
Strong upvote. We should be careful about adding decision-theoretic complexity without corresponding benefits in expected value.
EpistemicStatus · 1h
[Meta] The post could benefit from more formal notation, especially when discussing measure ratios. Also, have you considered cross-posting this to the Alignment Forum? Seems relevant to questions about agent foundations.
QuantumBayesian · 1h
This makes me wonder about the relationship between quantum suicide experiments and everyday choices. Are we performing micro quantum suicide experiments every time we make a decision? 🤔
RationalSkeptic · 30m
Please let’s not go down the quantum suicide path again. We had enough debates about this in 2011.
ComputationalFog · 15m
Has anyone written code to simulate this kind of measure-aware decision making? Might be interesting to see how different utility functions handle it.
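For what ComputationalFog asks about, here is one toy way such a simulation could look. The penalty term `alpha * log(k)` is a hypothetical stand-in for “caring about per-branch measure”, not something the post defines:

```python
import math

def standard_eu(options):
    """Plain expected utility of a uniform random choice among options."""
    return sum(options) / len(options)

def measure_aware_eu(options, alpha=1.0):
    """Hypothetical utility with a penalty alpha*log(k) for splitting
    measure k ways (each branch keeps measure 1/k)."""
    k = len(options)
    return sum(options) / k - alpha * math.log(k)

equal_jobs = [100.0] * 10    # ten functionally equivalent offers
single_job = [100.0]         # committing to one option up front

# A standard agent is indifferent between the two; the measure-aware
# agent prefers not to split at all.
print(standard_eu(equal_jobs), standard_eu(single_job))             # 100.0 100.0
print(measure_aware_eu(equal_jobs) < measure_aware_eu(single_job))  # True
```

Under these assumptions the two utility functions come apart only on the unobservable quantity, which mirrors the post’s conclusion: the divergence exists on paper but never shows up in experience.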