Vladimir_N 3h
(This is a rather technical comment that attempts to clarify some decision-theoretic confusions.)
Your treatment of measure requires more formal specification. Let’s be precise about what we mean by “caring about measure” in decision-theoretic terms.
Consider a formalization where we have:
1. A space of possible outcomes Ω
2. A measure μ on this space
3. A utility function U: Ω → ℝ
4. A decision function D that maps available choices to distributions over Ω
The issue isn’t about “spending” measure, but about how we aggregate utility across branches. The standard formulation already handles this correctly through expected utility:
E[U] = ∫_Ω U(ω)dμ(ω)
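In a discrete toy setting the integral is just a measure-weighted sum. A minimal sketch (the branch labels, measures, and utilities here are made up for illustration):

```python
# Discrete version of E[U] = ∫_Ω U(ω) dμ(ω):
# sum each branch's utility weighted by its measure.

branches = {
    "heads": {"measure": 0.5, "utility": 10.0},  # hypothetical branch
    "tails": {"measure": 0.5, "utility": 2.0},   # hypothetical branch
}

def expected_utility(branches):
    # Aggregate utility across branches, weighted by measure μ.
    return sum(b["utility"] * b["measure"] for b in branches.values())

print(expected_utility(branches))  # 6.0
```

Note that the measure enters only as an aggregation weight; it never appears inside the utility values themselves.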
Your concern about “measure decline” seems to conflate the measure μ with the utility U. These are fundamentally different mathematical objects serving different purposes in the formalism.
If we try to modify this to “care about measure directly,” we’d need something like:
U’(ω) = U(ω) * f(μ(ω))
But this leads to problematic decision-theoretic behavior, violating basic consistency requirements like dynamic consistency. It’s not clear how to specify f in a way that doesn’t lead to contradictions.
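A toy numerical sketch of the inconsistency (the particular f below is an arbitrary non-homogeneous choice invented for this example; any f that is not a pure power of μ behaves similarly): the same pair of plans gets ranked one way when evaluated at the choice node with conditional measures, and the opposite way when evaluated ex ante with the choice node sitting behind an earlier fair coin.

```python
# Modified objective: sum of U(ω) * f(μ(ω)) instead of U(ω) * μ(ω).

def f(mu):
    # Hypothetical non-homogeneous weighting: discount "thin" branches.
    return mu if mu >= 0.5 else mu / 2

def value(plan, scale=1.0):
    # plan: list of (conditional_measure, utility) branches.
    # scale: measure of the choice node itself (1.0 = evaluated at the node).
    return sum(u * f(mu * scale) for mu, u in plan)

X = [(1.0, 1.0)]               # sure thing: utility 1 with certainty
Y = [(0.5, 2.5), (0.5, 0.0)]   # risky bet: utility 2.5 on a fair coin

# Evaluated at the choice node (conditional measures):
print(value(X), value(Y))            # 1.0 1.25  -> prefers Y

# Evaluated ex ante, choice node behind a fair coin (scale = 0.5):
print(value(X, 0.5), value(Y, 0.5))  # 0.5 0.3125 -> prefers X
```

With the standard f(μ) = μ, both evaluations rank Y above X, because rescaling every branch's measure by the same factor rescales every plan's value by that factor and preserves the ordering; a non-homogeneous f breaks exactly this invariance, which is the dynamic inconsistency at issue.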
The apparent paradox dissolves when we properly separate:
1. Measure as probability measure (μ)
2. Utility as a function representing the preference ordering over outcomes (U)
3. Decision-theoretic aggregation (E[U])
[Technical note: This relates to my work on logical uncertainty and reflection principles. See my 2011 paper on decision theory in anthropic contexts.]
orthonormal · 2h
> U’(ω) = U(ω) * f(μ(ω))
This is a very clean way of showing why “caring about measure” leads to problems.
Vladimir_N · 2h
Yes, though there are even deeper issues with updateless treatment of anthropic measure that I haven’t addressed here for brevity.
Wei_D · 1h
Interesting formalization. How would this handle cases where the agent’s preferences include preferences over the measure itself?
Vladimir_N · 45m
That would require extending the outcome space Ω to include descriptions of measures, which brings additional technical complications...
[Note: This comment assumes familiarity with measure theory and decision theory fundamentals.]