I don’t understand. A fixed utility function doesn’t equal an unfixed utility function, since optimizing for them leads to different outcomes.
Edit: do you mean that we cannot optimize for an unfixed utility function? In the second part of the article I tried to demonstrate that the meaning of “optimization according to a utility function” should be part of the utility function itself, as otherwise the result of optimization depends on the optimization algorithm too, making the utility function insufficient to describe everything one cares about.
I don’t mean that at all. Given that we have a utility function U() over states of the world, Nesov’s argument is essentially that:
U(future given that we make sure our descendants have the same utility function as us) >> U(future given that we let our descendants’ utility functions drift away from ours)
Where “>>” means astronomically more. There is no comparison of utilities across utility functions.
A future is not a world-state; it is a sequence of world-states. So your statement must be reformulated somehow.
Either (1) we must define the utility function over a set of (valid) sequences of world-states, or (2) we must define what it means for a sequence of world-states to be optimized for a given U, [edit] and that definition should then be part of U itself, since U is all we care about. [/edit]
And option 1 is either impossible (if the rules of the world don’t permit an agent to hold the full history of the world) or reducible to an equivalent utility function over world-states, leaving option 2 as the only viable choice.
Then your statement means either
For all world-states x in the sequence of world-states optimized for U, U(x) > U(y), where y doesn’t belong to the sequence of world-states optimized for U. And that means we must know in advance which future world-states are reachable.
or
U(x) > U(y) for all world-states x in the sequence of world-states optimized for U and all world-states y in the sequence of world-states optimized for some U2. But U2(x) < U2(y).
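The second reading can be made concrete with a toy sketch (a hypothetical illustration of my own, not anything from the original exchange): two utility functions over world-states, each greedily producing its own trajectory, where each trajectory scores higher under its own function and lower under the other.

```python
# Toy illustration of the second reading. All names and the 0..9 world
# are hypothetical; an agent may step +1 or -1 from its current state.

def U(state):             # first utility function: prefers high states
    return state

def U2(state):            # second utility function: prefers low states
    return -state

def greedy_trajectory(utility, start, steps):
    """Greedily pick the neighboring state with higher utility."""
    traj = [start]
    for _ in range(steps):
        current = traj[-1]
        candidates = [s for s in (current - 1, current + 1) if 0 <= s <= 9]
        traj.append(max(candidates, key=utility))
    return traj

traj_U = greedy_trajectory(U, start=5, steps=4)    # climbs toward 9
traj_U2 = greedy_trajectory(U2, start=5, steps=4)  # descends toward 0

# Past the shared starting state, every state in U's trajectory beats
# every state in U2's trajectory under U, and vice versa under U2:
assert all(U(x) > U(y) for x in traj_U[1:] for y in traj_U2[1:])
assert all(U2(y) > U2(x) for x in traj_U[1:] for y in traj_U2[1:])
```

This is exactly the situation in the second formulation: the comparison U(x) > U(y) holds within U’s own ranking, while U2 ranks the same pairs the opposite way, so no cross-function comparison of utilities is ever made.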
However, this is not the main point of my post. The main point is that optimizing the future doesn’t necessarily mean maximizing a fixed function we know in advance.
Edit: I’m not really arguing with Vladimir here, since “future optimization as utility maximization” can be part of his value function, and arguing about values per se is pointless. But maybe he misinterprets what he really values.