Regularize by a function other than KL divergence. For heavy-tailed error distributions, KL-divergence regularization doesn’t work, but capping the maximum odds ratio for any action (similar to quantilizers) still results in positive utility.
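For concreteness, here is a minimal sketch of what that kind of cap could look like, reading “odds ratio” loosely as the per-action probability ratio π(a)/π₀(a). The greedy construction, the uniform base policy, and all numbers below are my own illustrative assumptions, not anything from either paper:

```python
import numpy as np

def capped_ratio_policy(proxy_reward, base_policy, max_ratio):
    """Maximize expected proxy reward subject to pi(a) <= max_ratio * pi0(a).

    Greedy water-filling: pour probability mass onto the highest proxy-reward
    actions, never exceeding the per-action cap. With max_ratio = 1/q this
    recovers a quantilizer-style policy over the base action distribution.
    """
    cap = max_ratio * base_policy          # per-action probability ceiling
    order = np.argsort(-proxy_reward)      # best proxy-reward actions first
    pi = np.zeros_like(base_policy)
    remaining = 1.0
    for a in order:
        pi[a] = min(cap[a], remaining)
        remaining -= pi[a]
        if remaining <= 0:
            break
    return pi

# Toy illustration (made-up numbers): one action may get a huge proxy reward
# purely from heavy-tailed error, but the cap bounds how much mass moves onto it.
rng = np.random.default_rng(0)
base = np.full(10, 0.1)                  # uniform base policy over 10 actions
proxy = rng.normal(size=10) + rng.standard_cauchy(size=10)  # utility + heavy-tailed error
pi = capped_ratio_policy(proxy, base, max_ratio=3.0)
print(pi.max() / base[pi.argmax()])      # never exceeds 3.0
```

No matter how extreme the proxy reward on a single action, the resulting policy can never put more than max_ratio times the base probability on it, which is exactly the property a bounded KL penalty fails to enforce.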
A recent paper from UC Berkeley named Preventing Reward Hacking with Occupancy Measure Regularization proposes replacing KL divergence regularization with occupancy measure (OM) regularization. OM regularization penalizes divergence in the state or state-action distribution rather than in the action distribution:
“Our insight is that when reward hacking, the agent visits drastically different states from those reached by the safe policy, causing large deviations in state occupancy measure (OM). Thus, we propose regularizing based on the OM divergence between policies instead of AD [action distribution] divergence to prevent reward hacking”
The idea is that regularizing to minimize changes in the action distribution isn’t always safe because small changes in the action distribution can cause large changes in the states visited by the agent:
Suppose we have access to a safe policy that drives slowly and avoids falling off the cliff. However, the car is optimizing a proxy reward function that prioritizes quickly reaching the destination, but not necessarily staying on the road. If we try to regularize the car’s action distributions to the safe policy, we will need to apply heavy regularization, since only slightly increasing the probability of some unsafe action (e.g., making a sharp right turn) can lead to disaster.
...
Our proposal follows naturally from this observation: to avoid reward hacking, regularize based on divergence from the safe policy’s occupancy measure, rather than action distribution. A policy’s occupancy measure (OM) is the distribution of states or state-action pairs seen by a policy when it interacts with its environment.
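To make the contrast concrete, here is a rough sketch in the spirit of that example. The toy “cliff” chain, the rollout-based occupancy estimate, and the particular divergences are my own illustrative choices, not the paper’s setup:

```python
import numpy as np

N_STATES, HORIZON, EPISODES = 10, 60, 2000
CLIFF = N_STATES - 1                     # absorbing "fell off the cliff" state
rng = np.random.default_rng(0)

def step(s, a):
    """a = 0: stay put (safe); a = 1: move right, towards the cliff."""
    if s == CLIFF:
        return s
    return min(s + 1, CLIFF) if a == 1 else s

def rollout_occupancy(policy):
    """Monte Carlo estimate of the state occupancy measure (visit distribution)."""
    counts = np.zeros(N_STATES)
    for _ in range(EPISODES):
        s = 0
        for _ in range(HORIZON):
            counts[s] += 1
            a = int(rng.random() < policy[s])   # policy[s] = P(move right | s)
            s = step(s, a)
    return counts / counts.sum()

safe = np.full(N_STATES, 0.05)           # rarely moves towards the cliff
risky = np.full(N_STATES, 0.15)          # modest shift in the action distribution

# Action-distribution divergence: per-state KL between Bernoulli action probabilities.
def bern_kl(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
ad_kl = bern_kl(risky, safe).mean()

# Occupancy-measure divergence: total variation between state visit distributions.
om_safe, om_risky = rollout_occupancy(safe), rollout_occupancy(risky)
om_tv = 0.5 * np.abs(om_safe - om_risky).sum()

print(f"action-distribution KL (per state): {ad_kl:.3f}")
print(f"occupancy-measure TV distance:      {om_tv:.3f}")
print(f"time in cliff state: safe={om_safe[CLIFF]:.2f}, risky={om_risky[CLIFF]:.2f}")
```

On this toy setup the per-state action KL comes out small, while the occupancy measures differ substantially: the riskier policy spends a noticeable fraction of its time in the cliff state, which the safe policy essentially never reaches. That gap is what OM regularization is meant to pick up on.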
I think that paper and this one are complementary. Regularizing on the state or state-action occupancy measure fixes the problems that come from regularizing the action distribution, but if the occupancy-measure regularizer is still a KL divergence, you still get the problems described in this paper. The latest version on arXiv mentions this briefly.
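A rough sketch of what that combination might look like (my own construction, not something proposed in either paper): keep the occupancy-measure framing, but swap the KL term for a capped-ratio-style penalty like the one above, so that no state’s visit probability can quietly blow up relative to the safe policy:

```python
import numpy as np

def om_ratio_penalty(om_policy, om_safe, max_ratio, eps=1e-8):
    """Zero while every state's visit probability stays within max_ratio times
    the safe policy's visit probability; grows once any state exceeds the cap."""
    ratio = om_policy / (om_safe + eps)
    return np.maximum(ratio - max_ratio, 0.0).sum()

# Hypothetical occupancy estimates over four states (e.g. from rollouts as above):
om_safe  = np.array([0.70, 0.20, 0.09, 0.01])
om_risky = np.array([0.30, 0.20, 0.20, 0.30])   # piles mass onto a rarely-visited state
print(om_ratio_penalty(om_risky, om_safe, max_ratio=3.0))   # > 0: the cap is violated
```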