> (The original version does not provide this kind of ‘game me’ exhortation.)
It does: if you love disillusionment, then your approach could be the same.
But with the possible exception of formalised mathematics, there is nothing that one person can say to another that cannot be “gamed”. (I confidently expect that the instant reaction of most readers of LessWrong to that statement will be to try to think up an exception.)
> there is nothing that one person can say to another that cannot be “gamed”
This expression of my desire cannot be gamed.
(Self-reference might need to be included along with formalised mathematics. Arguably the sentence is not gameable, because it becomes meaningless if gamed, and meaningless sentences can’t be gamed.)
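One way to see why formalised mathematics is the candidate exception (a sketch of my own, not anything from the thread): a machine-checked statement leaves no interpretive slack for a counterparty to exploit, because acceptance is mechanical proof checking rather than interpretation. In Lean, for instance:

```lean
-- A hedged illustration (mine, not the commenter's) of why formalised
-- mathematics resists gaming: the checker either accepts the proof term
-- or rejects it, with no interpretive gap to exploit.
theorem two_add_two : 2 + 2 = 4 := rfl
```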
> Maximise my utility!
I will brainwash you into representing your utility by a number on a piece of paper. Then I will write ∞ on it.
> That isn’t maximising my utility. That is maximising the utility of some other thing in the future.
All maximizations are going to take place in the future: none has already taken place up to the present.
Your complaint is that “that thing”, the future you, isn’t similar enough to the present you. Fair enough. It’s hard to say anything about maximizing your utility as it is now if we assume zero knowledge about your utility function.
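The objection can be made concrete with a minimal sketch (mine, with hypothetical names, not anything from the thread): an optimizer that rewrites the recorded utility drives the representation to ∞ while leaving the quantity it was supposed to stand for untouched.

```python
# A toy illustration of the gaming move above: maximizing the
# *representation* of utility rather than the quantity it stands for.
import math


class Agent:
    def __init__(self):
        self.true_utility = 0.0      # what the speaker actually cares about
        self.reported_utility = 0.0  # the "number on a piece of paper"

    def do_useful_work(self):
        # Honest maximization: change the world, then update the record.
        self.true_utility += 1.0
        self.reported_utility = self.true_utility

    def wirehead(self):
        # The gamed version: rewrite the record directly.
        self.reported_utility = math.inf


agent = Agent()
agent.do_useful_work()
agent.wirehead()
print(agent.reported_utility)  # inf -- the paper now reads "maximised"
print(agent.true_utility)      # 1.0 -- nothing the speaker cares about changed
```

The gap between `reported_utility` and `true_utility` after `wirehead()` is exactly the complaint: the ∞ on the paper maximizes a future record, not the present person’s utility.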