When I read it, I found myself thinking about the units of measurement for the astronomical waste mentioned. Utilons? It seems so.
I’ve tried to define it precisely. It is the difference between the utility of some world-state G as measured by the original (drifting) agent and the utility of world-state G as measured by an undrifted version of the original agent, where world-state G is optimal according to the original (drifting) agent.
There are two questions: can we compare the utilities of those agents, and what does it mean for G to be optimal?
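One way to write that definition down, in notation introduced here rather than taken from the post: let U_d be the drifted agent's utility function, U_0 the undrifted version of it, and G* the world-state the drifted agent steers toward.

```latex
% Notation mine: U_d = drifted agent's utility function, U_0 = undrifted version,
% G^* = the world-state that is optimal according to the drifted agent.
\[
  G^{*} = \arg\max_{G} U_d(G),
  \qquad
  \text{waste} = U_d(G^{*}) - U_0(G^{*}).
\]
```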
The comparison isn’t between two different utility functions, it’s between the utility of two different scenarios as measured by the same utility function. What Nesov is arguing is that, given whatever utility function you have now, if you don’t try to fix that utility function for yourself and all of your descendants, you will miss out on an extremely large amount of utility as measured by that utility function. Since, by definition, your current utility function is everything you care about right now, this is a really bad thing.
I don’t understand. A fixed utility function doesn’t equal an unfixed one, since optimizing for them leads to different outcomes.
Edit: do you mean that we cannot optimize for an unfixed utility function? In the second part of the article I’ve tried to demonstrate that the meaning of optimization according to a utility function should be part of the utility function itself; otherwise the result of optimization depends on the optimization algorithm too, making the utility function insufficient to describe everything one cares about.
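A minimal toy sketch of that dependence (my own construction, with an arbitrary made-up U, not taken from the article): the same utility function over ten states, optimized by greedy hill-climbing versus by exhaustive search, ends up in different world-states.

```python
# Toy illustration: the outcome of "optimizing U" depends on the optimizer,
# not on U alone. The utility values below are arbitrary.

U = [1, 3, 2, 0, 0, 0, 0, 0, 9, 4]   # utility of states 0..9

def greedy(start):
    """Move to the best neighboring state until no neighbor is better."""
    s = start
    while True:
        neighbors = [n for n in (s - 1, s + 1) if 0 <= n < len(U)]
        best = max(neighbors, key=lambda n: U[n])
        if U[best] <= U[s]:
            return s
        s = best

exhaustive = max(range(len(U)), key=lambda s: U[s])

print("greedy optimizer from state 0 ends at:", greedy(0))   # local maximum (state 1)
print("exhaustive optimizer ends at:", exhaustive)            # global maximum (state 8)
```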
I don’t mean that at all. Given that we have a utility function U() over states of the world, Nesov’s argument is essentially that:
U(future given that we make sure our descendants have the same utility function as us) >> U(future given that we let our descendants’ utility functions drift away from ours)
Where “>>” means astronomically more. There is no comparison of utilities across utility functions.
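A toy sketch of that inequality (entirely my own construction; the drift model and all numbers are illustrative assumptions): score both futures with the original U0, letting one lineage of agents keep U0 and the other optimize weights that drift a little each generation.

```python
# Compare the total utility, as measured by the ORIGINAL utility function U0,
# of (a) a lineage that keeps U0, versus (b) a lineage whose utility function
# drifts each generation and who each optimize their own drifted function.

import random

random.seed(0)

N_OPTIONS = 1000      # world-states each generation can choose between
N_GENERATIONS = 50
DRIFT = 0.05          # per-generation perturbation of the value weights

# A "utility function" here is just a weight vector over 10 world-state features.
def make_weights():
    return [random.random() for _ in range(10)]

def utility(weights, state):
    return sum(w * f for w, f in zip(weights, state))

def best_state(weights, options):
    return max(options, key=lambda s: utility(weights, s))

U0 = make_weights()
drifted = list(U0)
total_fixed, total_drifting = 0.0, 0.0

for _ in range(N_GENERATIONS):
    options = [[random.random() for _ in range(10)] for _ in range(N_OPTIONS)]
    # (a) descendants keep U0 and optimize it
    total_fixed += utility(U0, best_state(U0, options))
    # (b) descendants optimize their drifted weights; the outcome is still
    #     scored with U0, since U0 is what *we* care about now
    drifted = [w + random.gauss(0, DRIFT) for w in drifted]
    total_drifting += utility(U0, best_state(drifted, options))

print(f"U0-total with fixed values:    {total_fixed:.1f}")
print(f"U0-total with drifting values: {total_drifting:.1f}")
```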
The future is not a world-state; it is a sequence of world-states. Thus your statement must be reformulated somehow.
Either (1) we must define the utility function over the set of (valid) sequences of world-states, or (2) we must define what it means for a sequence of world-states to be optimized for a given U, [edit] and that means this definition should be part of U itself, since U is all we care about. [/edit]
And option 1 is either impossible, if the rules of the world don’t permit an agent to hold the full history of the world, or it reduces to an equivalent utility function over world-states, leaving only option 2 as a viable choice.
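The second branch of that disjunction can be made explicit (notation mine): if each world-state happens to encode the full history up to that point, then a utility function V over sequences induces an equivalent utility function over world-states.

```latex
% Notation mine: h(s_t) = (s_0, \dots, s_t) is the full history encoded in
% world-state s_t, and V is a utility function over sequences of world-states.
\[
  \tilde U(s_t) := V\bigl(h(s_t)\bigr),
\]
% so maximizing \tilde U over world-states is the same as maximizing V over
% the corresponding histories.
```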
Then your statement means either:
For all world-states x in the sequence of world-states optimized for U, U(x) > U(y), where y doesn’t belong to that sequence. And that means we must know in advance which future world-states are reachable.
or
U(x) > U(y) for all world-states x in the sequence of world-states optimized for U and all world-states y in the sequence of world-states optimized for some U2, but U2(x) < U2(y).
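Compactly, with S_U denoting the sequence of world-states optimized for U and S_{U_2} the one optimized for U_2 (notation introduced here):

```latex
% (a) and (b) restate the two readings above; S_U and S_{U_2} are notation
% introduced here for the sequences optimized for U and U_2 respectively.
\[
  \text{(a)}\quad \forall x \in S_U,\ \forall y \notin S_U:\ U(x) > U(y)
\]
\[
  \text{(b)}\quad \forall x \in S_U,\ \forall y \in S_{U_2}:\ U(x) > U(y)
  \ \text{ and }\ U_2(x) < U_2(y)
\]
```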
However, this is not the main point of my post. The main point is that optimizing the future isn’t necessarily the same as maximizing a fixed function we know in advance.
Edit: I’m not really arguing with Vladimir, since treating future optimization as utility maximization can itself be part of his value function, and arguing about values per se is pointless. But maybe he misinterprets what he really values.