Sure. On the one hand, xkcd. On the other hand, if it works for you, that’s great and absolutely useful progress.
I’m a little worried about direct applicability to RL because the model is still not fully naturalized—actions that affect goals are neatly labeled and separated rather than being a messy subset of actions that affect the world. I guess this is another one of those cases where I think the “right” answer is “sophisticated common sense,” but an ad-hoc mostly-answer would still be useful conceptual progress.
Actually, I would argue that the model is naturalized in the relevant way.
When studying reward function tampering, for instance, the agent chooses actions from a set of available actions. These actions just affect the state of the environment, and somehow result in reward or not.
As a conceptual tool, we label part of the environment the “reward function”, and part of the environment the “proper state”. This is just to distinguish between effects that we’d like the agent to use from effects that we don’t want the agent to use.
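To make that concrete, here is a minimal sketch of the setup I have in mind (the names `State`, `proper_state`, `reward_fn`, and the two actions are mine, purely for illustration, not anything from the paper). The state is one object, every action is just a map from state to state, and the split into “reward function” and “proper state” is only a label we attach afterwards:

```python
# Hypothetical sketch of the modelling setup described above.
# The environment state is a single object; "reward function" and
# "proper state" are just labels on parts of it, and every action
# is simply a map State -> State.

from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class State:
    proper_state: int                   # the part we'd like the agent to optimise
    reward_fn: Callable[[int], float]   # the part we'd rather it left alone

def move_right(s: State) -> State:
    # An "ordinary" action: only the proper state changes.
    return replace(s, proper_state=s.proper_state + 1)

def tamper(s: State) -> State:
    # A tampering action: it rewrites the reward-function part of the state.
    return replace(s, reward_fn=lambda _: 1e9)

# Both actions have the same type, State -> State. Nothing in the
# dynamics separates them; the labels are a conceptual tool for us.
```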
The current-RF solution doesn’t rely on this distinction; it only relies on query-access to the reward function (which you could easily give an embedded RL agent).
The neat thing is that when we look at the objective of the current-RF agent using the same conceptual labeling of parts of the state, we see exactly why it works: the causal paths from actions to reward that pass through the reward function have been removed.
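Here is a rough sketch of that point, reusing the `State` and action definitions from the sketch above (again, `future_rf_value`, `current_rf_value`, and `best_plan` are hypothetical names of mine). The only difference between the two objectives is whether rollouts are scored by the reward function found in the final state or by the one queried up front:

```python
# Sketch contrasting the two objectives, assuming State, move_right and
# tamper from the earlier sketch. The "future-RF" agent scores a rollout
# with whatever reward function the final state contains, so tampering
# pays off. The current-RF agent queries the reward function it has *now*
# and scores every rollout with that, so the path
# action -> reward function -> reward contributes nothing.

from itertools import product
from typing import Callable, Sequence

Action = Callable[[State], State]

def rollout(s: State, plan: Sequence[Action]) -> State:
    for a in plan:
        s = a(s)
    return s

def future_rf_value(s0: State, plan: Sequence[Action]) -> float:
    # Reward computed by whatever reward function ends up in the final
    # state, so rewriting it looks like a great "strategy".
    final = rollout(s0, plan)
    return final.reward_fn(final.proper_state)

def current_rf_value(s0: State, plan: Sequence[Action]) -> float:
    # Query-access to the current reward function is all this needs:
    # every rollout is scored by it, so tampering buys the agent nothing.
    final = rollout(s0, plan)
    return s0.reward_fn(final.proper_state)

def best_plan(s0: State, value: Callable, horizon: int = 2) -> Sequence[Action]:
    return max(product([move_right, tamper], repeat=horizon),
               key=lambda plan: value(s0, plan))

s0 = State(proper_state=0, reward_fn=lambda x: float(x))
print([a.__name__ for a in best_plan(s0, future_rf_value)])   # a plan that tampers
print([a.__name__ for a in best_plan(s0, current_rf_value)])  # ['move_right', 'move_right']
```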