This is confusing because “reward functions” in RL and utilities in decision theory (or moral philosophy) apply to world states or outcomes, not plans.
While they are usually described in the context of world states and outcomes, I don’t think there is anything special about that distinction. Or, to phrase it another way: an embedded agent that views itself as part of the world can treat its own behavior as part of the world state, and that behavior is something it can have valid preferences about.
The most direct link between traditional RL and this concept is reward shaping. Very frequently, a sparse and distant goal prevents effective training: the agent almost never stumbles onto the reward, so there is little signal to learn from. To compensate, the reward function is modified to include incremental signals that are easier to reach. For locomotion, this might look like “reward velocities that are positive along the X axis,” while the original reward might have just been “reach position.X >= 100.”
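As a minimal sketch of that difference, here is what the two reward functions could look like for a hypothetical locomotion environment. The state attributes and the 0.01 velocity weight are illustrative assumptions, not taken from any particular library:

```python
from dataclasses import dataclass

# Hypothetical locomotion state; the attribute names are illustrative,
# not from any particular environment library.
@dataclass
class LocomotionState:
    x_position: float
    x_velocity: float

def sparse_reward(state: LocomotionState) -> float:
    """Original objective: reward only once the agent reaches x >= 100."""
    return 1.0 if state.x_position >= 100.0 else 0.0

def shaped_reward(state: LocomotionState) -> float:
    """Shaped objective: also reward positive velocity along the X axis,
    giving the optimizer an incremental signal long before the goal."""
    goal_bonus = 1.0 if state.x_position >= 100.0 else 0.0
    return goal_bonus + 0.01 * max(state.x_velocity, 0.0)
```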
Reward shaping can be pushed arbitrarily far. You could implement imitation in the reward function: no longer is the reward just about an outcome, but also about how that outcome comes about. (Or, to phrase it the other way again: the how becomes an outcome itself.)
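A minimal sketch of what folding imitation into the reward could look like, assuming per-step access to a demonstrator’s action (the names here are hypothetical):

```python
import numpy as np

def imitation_reward(agent_action: np.ndarray, expert_action: np.ndarray) -> float:
    """Reward for matching a demonstrated action at the same state: the reward
    now cares about how the agent acts, not just about the final outcome."""
    return -float(np.sum((agent_action - expert_action) ** 2))
```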
In the limit, the reward function can be made extremely dense, such that every possible output is associated with informative reward shaping. You can specify a reward function that, when sampled with traditional RL, reconstructs gradients similar to those of a predictive loss. The point I’m trying to get at is that there isn’t a fundamental difference in kind.
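To make that limit concrete, here is a sketch (in PyTorch, with assumed tensor shapes and an illustrative function name) of a per-position reward that scores each output with the log probability of the target token; maximizing the sum of these rewards is the same objective as minimizing the usual cross-entropy predictive loss:

```python
import torch
import torch.nn.functional as F

def dense_predictive_reward(logits: torch.Tensor, target_tokens: torch.Tensor) -> torch.Tensor:
    """Per-position reward: the log probability the model assigns to the token
    a predictive loss would treat as correct. Maximizing the sum of these
    rewards is the same objective as minimizing cross-entropy."""
    log_probs = F.log_softmax(logits, dim=-1)                             # (seq_len, vocab_size)
    return log_probs.gather(-1, target_tokens.unsqueeze(-1)).squeeze(-1)  # (seq_len,)
```

The design choice here is to make the reward purely a function of how each output was produced relative to a target, which is exactly the dense, per-step shaping described above.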
A big part of what I’m trying to do with these posts is to connect predictors/simulators to existing frameworks like utility and reward. If one of those frameworks (which tend to have a lot of strength where they apply) suggested something bad about predictors with respect to safety efforts, it would be important to know.