As the tool’s decision in this thought experiment is made invariant on the tool’s settings (“utility” and prior), showing that the tool’s decision is wrong according to a person’s preference (after “careful” reflection) proves that there is no way to set up “utility”
My argument is that, if Omega is offering to double vNM utility, the set-up of the thought experiment rules out the possibility that the decision could be wrong according to a person’s considered preference (because the claim to be doubling vNM utility embodies an assumption about what a person’s considered preference is). AFAICT, the thought experiment then amounts to asking: “If I should maximize expected utility, should I maximize expected utility?” Regardless of whether I should actually maximize expected utility or not, the correct answer to this question is still “yes”. But the thought experiment is completely uninformative.
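To spell out the step from “doubling vNM utility” to triviality, here is a minimal sketch (the explicit form of the offer below, and the assumption that current utility is positive, are my own illustrative additions, not something the thought experiment specifies):

$$A \succeq B \;\iff\; \mathbb{E}_A[u] \ge \mathbb{E}_B[u] \qquad \text{(vNM representation of the agent’s considered preference)}$$

Any stipulation about how much vNM utility the offer delivers is already, via this equivalence, a stipulation about where the offer sits in the agent’s considered preference ordering over gambles. For instance, if accepting is assumed to yield expected utility $2\,u(\text{now})$ with $u(\text{now}) > 0$, then $\mathbb{E}_{\text{accept}}[u] > u(\text{now}) = \mathbb{E}_{\text{reject}}[u]$, and accepting is the considered preference by definition; the question answers itself.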
Do you understand my argument for this conclusion? (Fourth para of my previous comment.) If you do, can you point out where you think it goes astray? If you don’t, could you tell me what part you don’t understand so I can try to clarify my thinking?
On the other hand, if Omega is offering to double something other than vNM utility (hedons/valutilons/whatever) then I don’t think we have any disagreement. (Do we? Do you disagree with anything I said in para 5 of my previous comment?)
My point is just that the thought experiment is underspecified unless we’re clear about what the doubling applies to, and that people sometimes seem to shift back and forth between different meanings.
What you just said seems correct.
What was originally at issue is whether we should act in ways that will eventually destroy ourselves.
I think the big-picture conclusion from what you just wrote is that, if we see that we’re acting in ways that will probably exterminate life in short order, that doesn’t necessarily mean it’s the wrong thing to do.
However, in our circumstances, time discounting and “identity discounting” encourage us to start enjoying and dooming ourselves now, whereas it would probably be better to spread life to a few other galaxies first, and then enjoy ourselves.
(I admit that my use of the word “better” is problematic.)
Well, I don’t disagree with this, but I would still agree with it if you substituted “right” for “wrong”, so it doesn’t seem like much of a conclusion. ;)
Moving back toward your ignorance prior on a topic can still increase your log-score if the hypothesis was concentrating probability mass in the wrong areas (i.e., failing to concentrate a substantial amount in the right area).
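A quick numerical sketch of this point (the four outcomes, the particular distributions, and which outcome turns out to be true are all invented for illustration; log-score here means the log of the probability assigned to the realized outcome, higher being better):

    import math

    # A hypothesis that piles probability on the wrong outcome, versus a
    # uniform "ignorance prior" over the same four outcomes.
    hypothesis = [0.85, 0.05, 0.05, 0.05]   # favours outcome 0
    ignorance  = [0.25, 0.25, 0.25, 0.25]
    true_outcome = 1                        # but outcome 1 is what actually happens

    def log_score(dist, outcome):
        # Log of the probability assigned to the realized outcome.
        return math.log(dist[outcome])

    def blend(p, q, w):
        # Move a fraction w of the way from distribution p back toward q.
        return [(1 - w) * pi + w * qi for pi, qi in zip(p, q)]

    print(log_score(hypothesis, true_outcome))                          # log(0.05) ~ -3.00
    print(log_score(blend(hypothesis, ignorance, 0.5), true_outcome))   # log(0.15) ~ -1.90
    print(log_score(ignorance, true_outcome))                           # log(0.25) ~ -1.39

Moving halfway back toward the ignorance prior improves the score, because the hypothesis had assigned almost no mass to the outcome that actually occurred.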
You argue that the thought experiment is trivial and doesn’t solve any problems. In my comments above I described a specific setup that shows how to use (interpret) the thought experiment to potentially obtain non-trivial results.
I argue that the thought experiment is ambiguous, and that for a certain definition of utility (vNM utility), it is trivial and doesn’t solve any problems. For this definition of utility I argue that your example doesn’t work. You do not appear to have engaged with this argument, despite repeated requests to point out either where it goes wrong or where it is unclear. If it goes wrong, I want to know why, but this conversation isn’t really helping.
For other definitions of utility, I do not claim, and have never claimed, that the thought experiment is trivial. In fact, I think it is very interesting.
If by “your example” you refer to the setup described in this comment, I don’t understand what you are saying here. I don’t use any “definition of utility”; it’s just a parameter of the tool.
It’s also an entity in the problem set-up. When Omega says “I’ll double your utility”, what is she offering to double? Without defining this, the problem isn’t well-specified.
Certainly, you need to resolve any underspecification. There are ways to do this usefully (or not).
Agreed. My point is simply that one particular (tempting) way of resolving the underspecification is non-useful. ;)