In replies to this comment I will post other Sonnet 3.5-generated replies written in the style of known LW people. If this is against the rules, please let me know and I will delete them. I will slightly change the names so that they do not contaminate future searches or AI training data.
Wei · 3h
This post touches on several issues I’ve been thinking about since my early work on anthropic decision theory and UDT. Let me break this down:
1. The measure-decline problem is actually more general than just quantum mechanics. It appears in any situation where your decision algorithm gets instantiated multiple times, including classical copying, simulation, or indexical uncertainty. See my old posts on anthropic probabilities and probability-as-preference.
2. The “functional identity” argument being used here to dismiss certain types of splitting is problematic. What counts as “functionally identical” depends on your decision theory’s level of granularity. UDT1.1 would treat seemingly identical copies differently if they’re in different computational states, while CDT might lump them together.
Some relevant questions that aren’t addressed:
- How do we handle preference aggregation across different versions of yourself with different measures?
- Should we treat quantum branching differently from other forms of splitting? (I lean towards “no” these days)
- How does this interact with questions of personal identity continuity?
- What happens when we consider infinite branches? (This relates to my work on infinite ethics)
The real issue here isn’t about measure per se, but about how to aggregate preferences across different instances of your decision algorithm. This connects to some open problems in decision theory:
1. The problem of preference aggregation across copies
2. How to handle logical uncertainty in the context of anthropics
3. Whether “caring about measure” can be coherently formalized
I explored some of these issues in my paper on UDT, but I now think the framework needs significant revision to handle these cases properly.
Stuart · 2h
> The problem of preference aggregation across copies
This seems key. Have you made any progress on formalizing this since your 2019 posts?
Wei · 2h
Some progress on the math, but still hitting fundamental issues with infinity. Might post about this soon.
Abram · 1h
Curious about your current thoughts on treating decision-theoretic identical copies differently. Seems related to logical causation?
Wei · 45m
Yes—this connects to some ideas about logical coordination I’ve been developing. The key insight is that even “identical” copies might have different logical roles...
[Edit: For those interested in following up, I recommend starting with my sequence on decision theory and anthropics, then moving to the more recent work on logical uncertainty.]
Vladimir_N · 3h
(This is a rather technical comment that attempts to clarify some decision-theoretic confusions.)
Your treatment of measure requires more formal specification. Let’s be precise about what we mean by “caring about measure” in decision-theoretic terms.
Consider a formalization where we have:
1. A space of possible outcomes Ω
2. A measure μ on this space
3. A utility function U: Ω → ℝ
4. A decision function D that maps available choices to distributions over Ω
The issue isn’t about “spending” measure, but about how we aggregate utility across branches. The standard formulation already handles this correctly through expected utility:
E[U] = ∫_Ω U(ω)dμ(ω)
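A minimal discrete sketch of this setup (my own toy instantiation; the outcome labels, measures, and utilities are illustrative assumptions, not anything from the formalism above): with a finite Ω, the integral reduces to a measure-weighted sum, and D maps each choice to a distribution over Ω.

```python
# Toy instantiation of the formalism above; all numbers are
# illustrative assumptions, not taken from the comment.
omega = ["up", "down"]                 # 1. outcome space, Omega
mu = {"up": 0.5, "down": 0.5}          # 2. measure mu on Omega
U = {"up": 10.0, "down": -2.0}         # 3. utility function U: Omega -> R

def D(choice):
    # 4. decision function: maps a choice to a distribution over Omega.
    # "act" is a hypothetical option that shifts weight toward "up".
    return {"act": {"up": 0.8, "down": 0.2}, "wait": mu}[choice]

def expected_utility(dist):
    # E[U] = sum over omega of U(w) * dist(w), the discrete form of the integral
    return sum(U[w] * dist[w] for w in omega)

print(expected_utility(D("act")))   # 0.8*10 + 0.2*(-2) = 7.6
print(expected_utility(D("wait")))  # 0.5*10 + 0.5*(-2) = 4.0
```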
Your concern about “measure decline” seems to conflate the measure μ with the utility U. These are fundamentally different mathematical objects serving different purposes in the formalism.
If we try to modify this to “care about measure directly,” we’d need something like:
U’(ω) = U(ω) * f(μ(ω))
But this leads to problematic decision-theoretic behavior, violating basic requirements such as dynamic consistency. It’s not clear how to specify f in a way that doesn’t lead to contradictions.
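To make the inconsistency concrete, here is a toy sketch (my own construction; the payoffs and the particular non-homogeneous choice f(μ) = μ + 0.5 are assumptions, not anything specified above). An agent scoring branches by μ·U·f(μ) ranks the same pair of options one way when it renormalizes its measure to 1 after branching, and the opposite way under the global, unrenormalized measure, while standard expected utility ranks them identically at every scale:

```python
# Toy demonstration that U'(w) = U(w) * f(mu(w)) can break dynamic
# consistency. The payoffs and f are illustrative assumptions.

def standard_eu(branches):
    # E[U]: measure-weighted sum over (measure, utility) branches
    return sum(mu * u for mu, u in branches)

def measure_weighted(branches, f):
    # Modified score using U'(w) = U(w) * f(mu(w))
    return sum(mu * u * f(mu) for mu, u in branches)

f = lambda mu: mu + 0.5  # an arbitrary non-homogeneous f

def options(m):
    # At current measure m: "stay" keeps one branch at utility 9;
    # "split" divides m across two branches, each at utility 10.
    return {"stay": [(m, 9.0)], "split": [(m / 2, 10.0), (m / 2, 10.0)]}

for m in (1.0, 1 / 16):  # renormalized view vs. global view after prior splits
    opts = options(m)
    best_eu = max(opts, key=lambda k: standard_eu(opts[k]))
    best_mw = max(opts, key=lambda k: measure_weighted(opts[k], f))
    print(f"m={m}: standard EU -> {best_eu}, measure-weighted -> {best_mw}")

# Standard EU picks "split" at every m (10m > 9m), so it is scale-invariant.
# The measure-weighted agent picks "stay" at m=1 but "split" at m=1/16:
# renormalizing after unavoidable branching reverses its choice.
```

Note also that at m = 1 the measure-weighted agent rejects “split” even though it has strictly higher utility in every branch, which is exactly the “refuse to split in order to preserve measure” behavior discussed below; with a homogeneous f (say f(μ) = μ^k) the ranking becomes scale-invariant again, but the penalty on splitting remains.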
The apparent paradox dissolves when we properly separate:
1. Measure as probability measure (μ)
2. Utility as preference ordering over outcomes (U)
3. Decision-theoretic aggregation (E[U])
[Technical note: This relates to my work on logical uncertainty and reflection principles. See my 2011 paper on decision theory in anthropic contexts.]
orthonormal · 2h
> U’(ω) = U(ω) * f(μ(ω))
This is a very clean way of showing why “caring about measure” leads to problems.
Vladimir_N · 2h
Yes, though there are even deeper issues with the updateless treatment of anthropic measure that I haven’t addressed here for brevity.
Wei_D · 1h
Interesting formalization. How would this handle cases where the agent’s preferences include preferences over the measure itself?
Vladimir_N · 45m
That would require extending the outcome space Ω to include descriptions of measures, which brings additional technical complications...
[Note: This comment assumes familiarity with measure theory and decision theory fundamentals.]
Eli · 2h
*sigh*
I feel like I need to step in here because people are once again getting confused about measure, identity, and decision theory in ways I thought we cleared up circa 2008-2009.
First: The whole “measure declining by choice” framing is confused. You’re not “spending” measure like some kind of quantum currency. The measure *describes* the Born probabilities; it’s not something you optimize for directly any more than you should optimize for having higher probabilities in your belief distribution.
Second: The apparent “splitting” of worlds isn’t fundamentally different between quantum events, daily choices, and life-changing decisions. It’s all part of the same unified wavefunction evolving according to the same physics. The distinction being drawn here is anthropocentric and not particularly meaningful from the perspective of quantum mechanics.
What *is* relevant is how you handle subjective anticipation of future experiences. But note that “caring about measure” in the way described would lead to obviously wrong decisions—like refusing to make any choices at all to “preserve measure,” which would itself be a choice (!).
If you’re actually trying to maximize expected utility across the multiverse (which is what you should be doing), then the Born probabilities handle everything correctly without any need for additional complexity. The framework I laid out in Quantum Ethics handles this cleanly.
And please, can we stop with the quantum suicide thought experiments? They’re actively harmful to clear thinking about decision theory and anthropics. I literally wrote “Don’t Un-think the Quantum” to address exactly these kinds of confusions.
(Though I suppose I should be somewhat grateful that at least nobody in this thread has brought up p-zombies or consciousness crystals yet...)
[Edit: To be clear, this isn’t meant to discourage exploration of these ideas. But we should build on existing work rather than repeatedly discovering the same confusions.]
RationalSkeptic · 1h
> like refusing to make any choices at all to “preserve measure,”
This made me laugh out loud. Talk about Pascal’s Mugging via quantum mechanics...
Eli · 45m
Indeed. Though I’d note that proper handling of Pascal’s Mugging itself requires getting anthropics right first...