Simpler? Hm. Well, I’m still thinking about that one.
Anyhow, by over-specified I mean that ADT and conventional expected-utility maximization (which I implicitly assumed to come with the utility function) can give different answers. For example, consider a non-cooperative problem in which someone is copied either once or 10^9 times, and each copy gets a candy bar if it correctly guesses how many copies there are. The utility function already gives an answer, and no desiderata are given that show why that answer is wrong; in fact, it's one of the multiple possible answers laid out in the paper.
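To make that concrete, here's a rough sketch (my own toy numbers, nothing from the paper): fix a utility function and a prior over the coin, and conventional expected-utility maximization already picks an answer to the candy-bar problem; which answer it picks depends only on the utility function you started with.

```python
# Rough sketch (my own numbers, not from the paper): assumptions are a fair
# coin, every copy makes the same guess, and a correct guess earns that copy
# one candy bar.

N_SMALL, N_LARGE = 1, 10**9
PRIOR = {N_SMALL: 0.5, N_LARGE: 0.5}   # prior over how many copies exist

def expected_utility(guess, total=True):
    """Expected utility if every copy guesses `guess`.

    total=True sums candy bars over all copies; total=False averages them
    (a selfish, per-copy utility). These give two of the 'multiple possible
    answers', depending on which utility function you started with.
    """
    eu = 0.0
    for n, p in PRIOR.items():
        bars = n if guess == n else 0        # candy bars earned in that world
        eu += p * (bars if total else bars / n)
    return eu

for guess in (N_SMALL, N_LARGE):
    print(guess, expected_utility(guess, True), expected_utility(guess, False))
# Total utility says guess 10^9; the averaged utility is indifferent.
```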
Simpler in that you don’t need to transform it before it is useful here.
Standard expected utility maximization requires a probability distribution, but the problem is that in anthropic scenarios it is not obvious what the correct distribution is and how to correctly update it. ADT uses the prior distribution before ‘observing one’s own existence’, so it circumvents the need to perform anthropic updates.
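As a toy illustration (my own, not from the paper) of why this matters: the usual ‘halfer’ and ‘thirder’ credences for Sleeping Beauty recommend different break-even prices for the same per-awakening bet, so a standard expected-utility maximizer has to decide which of them is the correctly updated distribution before it can act.

```python
# Toy sketch (my own illustration, not from the paper): at each awakening,
# Sleeping Beauty may pay £x for a coupon worth £1 if the coin landed tails.
# The 'halfer' and 'thirder' credences recommend different break-even prices,
# which is the sense in which the updated distribution is not obvious.

def per_awakening_eu(p_tails, x):
    """Naive expected profit of buying the coupon at a single awakening."""
    return p_tails * (1 - x) - (1 - p_tails) * x

for label, p_tails in (("halfer (1/2)", 1/2), ("thirder (2/3)", 2/3)):
    print(label, "buy iff x <", round(p_tails, 3),
          "| EU at x = 0.6:", round(per_awakening_eu(p_tails, 0.6), 3))
# The two credences disagree about buying at x = 0.6. ADT instead works from
# the prior over the coin, before any anthropic update.
```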
I’m not sure which solution to your candy-bar problem you think is correct, because I am not sure which probability distribution you think is correct; but all the solutions in the paper that disagree with yours actually are what you would want to precommit to given the associated utility function and are therefore correct.
Standard expected utility maximization requires a probability distribution, but the problem is that in anthropic scenarios it is not obvious what the correct distribution is and how to correctly update it.
If it was solved in a way that made it obvious for, say, the Sleeping Beauty problem, would that then be the right way to do it?
all the solutions in the paper that disagree with yours actually are what you would want to precommit to given the associated utility function and are therefore correct.
I think you’re just making up utility functions here—is a real utility function (that is, a function of the state of the world) ever calculated in the paper, other than the use of the individual utility function? And if we’re talking about regular ol’ utility functions, why are ADT’s decisions necessarily invariant under changing time-like uncertainty (normal Sleeping Beauty problem) to space-like uncertainty (Sleeping Beauty problem with duplicates)?
If it was solved in a way that made it obvious for, say, the Sleeping Beauty problem, would that then be the right way to do it?
I would tentatively agree. To some extent the problem is one of choosing what it means for a distribution to be correct. I think that this is what Stuart’s ADT does (though I don’t think it’s a full solution to this).
You would also still need to account for acausal influence. Just picking a satisfactory probability distribution doesn’t ensure that you will one box on Newcomb’s problem, for example.
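As a sketch with standard textbook numbers (my own illustration, not from the paper): give a causal and an evidential expected-utility maximizer the very same distribution over the predictor's accuracy and they still give opposite advice, so the distribution alone doesn't settle the problem.

```python
# Minimal sketch (standard textbook numbers, not from the paper): even with an
# agreed distribution -- a predictor that is right 99% of the time -- the
# decision rule still matters on Newcomb's problem.
ACCURACY = 0.99          # assumed predictor accuracy
BIG, SMALL = 1_000_000, 1_000

# Evidential-style expected value: condition the box contents on your choice.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

# Causal-style expected value: the contents are already fixed, with some prior
# probability p that the big box is full, regardless of what you now do.
p_full = 0.5             # any fixed p gives the same ordering
cdt_one_box = p_full * BIG
cdt_two_box = p_full * BIG + SMALL   # always exactly SMALL more

print("evidential:", ev_one_box, ">", ev_two_box)   # one-boxing wins
print("causal:    ", cdt_one_box, "<", cdt_two_box) # two-boxing wins
# Same world-model, different handling of the (a)causal link, opposite advice.
```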
I think you’re just making up utility functions here—is a real utility function (that is, a function of the state of the world) ever calculated in the paper, other than the use of the individual utility function?
Is this quote what you had in mind? It seems like calculating a utility function to me, but I’m not sure what you mean by “other than the use of the individual utility function”.
In the tails world, future copies of myself will be offered the same deal twice. Any profit they make will be dedicated to hugging orphans/drowning kittens, so from my perspective, profits (and losses) will be doubled in the tails world. If my future copies will buy the coupon for £x, there would be an expected £0.5(2 × (−x + 1) + 1 × (−x + 0)) = £(1 − 3/2x) going towards my goal. Hence I would want my copies to buy whenever x < 2⁄3.
That is from page 7 of the paper.
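For what it's worth, here is a quick re-derivation of that line (my own check, under the setup as I read it: a fair coin, the coupon pays £1 on tails, the deal is taken twice in the tails world and once in the heads world, and every copy's profit goes toward the same goal):

```python
# Quick check of the page-7 arithmetic (my own re-derivation, under the
# assumptions stated in the lead-in above).
from fractions import Fraction

def expected_contribution(x):
    """£0.5 * (2*(-x + 1) + 1*(-x + 0)) = £(1 - (3/2)x)."""
    x = Fraction(x)
    return Fraction(1, 2) * (2 * (-x + 1) + 1 * (-x + 0))

assert expected_contribution(Fraction(2, 3)) == 0   # break-even at x = 2/3
assert expected_contribution(Fraction(1, 2)) > 0    # buy below 2/3
assert expected_contribution(Fraction(3, 4)) < 0    # refuse above 2/3
assert all(expected_contribution(x) == 1 - Fraction(3, 2) * x
           for x in (0, Fraction(1, 3), 1))          # matches 1 - (3/2)x
```

So the quoted £(1 − 3/2 x) and the x < 2⁄3 threshold check out.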
And if we’re talking about regular ol’ utility functions, why are ADT’s decisions necessarily invariant under changing time-like uncertainty (normal Sleeping Beauty problem) to space-like uncertainty (Sleeping Beauty problem with duplicates)?
They’re not necessarily invariant under such changes. All the examples in the paper were, but that’s because they all used rather simple utility functions.
And if we’re talking about regular ol’ utility functions, why are ADT’s decisions necessarily invariant under changing time-like uncertainty (normal Sleeping Beauty problem) to space-like uncertainty (Sleeping Beauty problem with duplicates)?
They’re not necessarily invariant under such changes. All the examples in the paper were, but that’s because they all used rather simple utility functions.
Hm, yes, you’re right about that.
Anyhow, I’m done here—I think you’ve gotten enough repetitions of my claim that if you’re not using probabilities, you’re not doing expected utility :) (okay, that was an oversimplification)