I don’t think I agree with this. Take the stars example, for instance. How do you actually know it’s a huge change? Sure, maybe if you had an infinitely powerful computer you could compute the distance between the full descriptions of the universe in these two states and find that it’s larger than the distance for a relative of yours dying. But agents don’t work like this.
Agents have an internal representation of the world, and if they are useful at all I think that representation will closely match our intuitions about what matters and what doesn’t. A useful agent won’t give any weight to the air atoms it displaces while moving, even though that might be considered “a huge change”, because it doesn’t actually affect its utility. But if it considers humans an important part of the world, so important that it may need to kill us to attain its goals, then it’s going to have a meaningful world-state representation that gives a lot of weight to humans, and that gives us a useful impact measure for free.
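To make that concrete, here’s a toy sketch of what I mean (purely illustrative; the features, weights, and numbers are all made up): an impact measure computed as a weighted distance over the agent’s internal representation, where features the agent’s utility ignores get near-zero weight.

```python
import numpy as np

def impact(state_before: np.ndarray, state_after: np.ndarray,
           utility_weights: np.ndarray) -> float:
    """Weighted L1 distance between two internal state representations.

    Features the agent's utility barely tracks (weight ~0) contribute almost
    nothing, so physically 'huge' but irrelevant changes register as low impact.
    """
    return float(np.sum(utility_weights * np.abs(state_after - state_before)))

# Hypothetical 3-feature representation: [air displaced, star positions, humans alive]
weights = np.array([0.0, 0.01, 10.0])  # utility mostly tracks the last feature

moving_around = impact(np.array([0.0, 0.0, 1.0]),
                       np.array([5.0, 0.0, 1.0]), weights)   # ~0: irrelevant change
killing_humans = impact(np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 0.0, 0.0]), weights)  # large: heavily weighted feature

print(moving_around, killing_humans)  # 0.0  10.0
```

The point is just that the distance is taken in the agent’s own representation, not over a full physical description of the universe, so “what counts as a big change” comes for free from what the agent already cares about.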