Standard expected utility maximization requires a probability distribution, but the problem is that in anthropic scenarios it is not obvious what the correct distribution is or how to correctly update it.
If it was solved in a way that made it obvious for, say, the Sleeping Beauty problem, would that then be the right way to do it?
all the solutions in the paper that disagree with yours actually are what you would want to precommit to given the associated utility function and are therefore correct.
I think you’re just making up utility functions here—is a real utility function (that is, a function of the state of the world) ever calculated in the paper, other than the use of the individual utility function? And if we’re talking about regular ol’ utility functions, why are ADT’s decisions necessarily invariant under changing time-like uncertainty (normal sleeping beauty problem) to space-like uncertainty (sleeping beauty problem with duplicates)?
If it was solved in a way that made it obvious for, say, the Sleeping Beauty problem, would that then be the right way to do it?
I would tentatively agree. To some extent the problem is one of choosing what it means for a distribution to be correct. I think that this is what Stuart’s ADT does (though I don’t think it’s a full solution to this).
You would also still need to account for acausal influence. Just picking a satisfactory probability distribution doesn’t ensure that you will one-box on Newcomb’s problem, for example.
I think you’re just making up utility functions here—is a real utility function (that is, a function of the state of the world) ever calculated in the paper, other than the use of the individual utility function?
Is this quote what you had in mind? It seems like calculating a utility function to me, but I’m not sure what you mean by “other than the use of the individual utility function”.
In the tails world, future copies of myself will be offered the same deal twice. Any profit they make will be dedicated to hugging orphans/drowning kittens, so from my perspective, profits (and losses) will be doubled in the tails world. If my future copies will buy the coupon for £x, there would be an expected £0.5(2 × (−x + 1) + 1 × (−x + 0)) = £(1 − (3/2)x) going towards my goal. Hence I would want my copies to buy whenever x < 2⁄3.
That is from page 7 of the paper.
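To make the arithmetic explicit, here is a quick Python sketch of that calculation (a minimal illustration only; the function name and test values are mine, not the paper's):

```python
# Sanity check of the coupon calculation quoted above (p. 7 of the paper).
# Tails world: the deal is taken twice and the coupon pays out 1, so the
# agent's goal gains 2 * (1 - x). Heads world: the deal is taken once and
# the coupon pays nothing, so the goal loses x. Each world has probability 0.5.

def expected_gain(x):
    """Expected contribution to the goal if the copies buy at price x."""
    tails = 2 * (-x + 1)  # two purchases, each worth (1 - x)
    heads = 1 * (-x + 0)  # one purchase, worth -x
    return 0.5 * (tails + heads)  # simplifies to 1 - (3/2) * x

# Buying is worthwhile exactly when the expected gain is positive, i.e. x < 2/3.
assert expected_gain(0.5) > 0               # below the threshold: buy
assert expected_gain(0.7) < 0               # above the threshold: don't buy
assert abs(expected_gain(2 / 3)) < 1e-12    # break-even at x = 2/3
```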
And if we’re talking about regular ol’ utility functions, why are ADT’s decisions necessarily invariant under changing time-like uncertainty (normal sleeping beauty problem) to space-like uncertainty (sleeping beauty problem with duplicates)?
They’re not necessarily invariant under such changes. All the examples in the paper were, but that’s because they all used rather simple utility functions.
And if we’re talking about regular ol’ utility functions, why are ADT’s decisions necessarily invariant under changing time-like uncertainty (normal sleeping beauty problem) to space-like uncertainty (sleeping beauty problem with duplicates)?
They’re not necessarily invariant under such changes. All the examples in the paper were, but that’s because they all used rather simple utility functions.
Hm, yes, you’re right about that.
Anyhow, I’m done here—I think you’ve gotten enough repetitions of my claim that if you’re not using probabilities, you’re not doing expected utility :) (okay, that was an oversimplification)