I really like this layout, this idea, and the diagrams. Great work.
Glad to hear it :)
I don’t agree that counterfactual oracles fix the incentive. There are black boxes in that proposal, like “how is the automated system not vulnerable to manipulation?” and “why do we think the system correctly formalizes and measures the quantity in question?” (see more potential problems). I think relying only on this kind of engineering cleverness is generally dangerous, because it produces safety measures we don’t see how to break (and probably not safety measures that actually don’t break).
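To make the worry concrete, here is a minimal sketch of how I understand the counterfactual-oracle training loop; `oracle`, `automated_score` and `observe_true_outcome` are hypothetical placeholders of mine, not anything from the paper:

```python
import random

# Minimal sketch of a counterfactual-oracle episode (my reading of the proposal),
# just to show where the pieces I'm calling black boxes live.

def run_episode(oracle, question, automated_score, observe_true_outcome,
                erasure_prob=0.01):
    answer = oracle(question)
    if random.random() < erasure_prob:
        # Erasure event: nobody ever reads the answer. Only these episodes are
        # rewarded, which is what is supposed to remove the incentive to write
        # manipulative answers.
        reward = automated_score(answer, observe_true_outcome(question))
        return reward, None          # train on this reward, show nothing
    # No erasure: the answer is shown to humans, but the episode is unrewarded.
    return None, answer

# The black boxes:
#   automated_score       -- why do we trust it to formalize the quantity in
#                            question correctly, and to be robust to answers
#                            crafted to game the scoring itself?
#   observe_true_outcome  -- reading off what actually happened without any
#                            manipulable (e.g. human) channel in the loop.
```

All of the safety load is carried by those two callables, and that is exactly the part I don’t see how to get right.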
Yes, the argument only holds under the assumptions you mention. Thanks for pointing to the discussion post about them.
Also, on page 10 you write that during deployment, agents appear as if they are optimizing the training reward function. As evhub et al. point out, this isn’t usually true: the objective recoverable from perfect IRL on a trained RL agent is often different (behavioral objective != training objective).
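A toy illustration of the gap (my own hypothetical corridor example, not from the paper or from evhub et al.): a tabular Q-learner trained where the reward is always at the right end produces a policy that is explained at least as well by “always go right” as by the training reward, and the two objectives come apart off-distribution.

```python
import numpy as np

# Toy corridor MDP: states 0..N-1, two actions (left, right).
# Training reward: +1 for reaching the rightmost state.
N = 5
ACTIONS = (-1, +1)

def step(s, a):
    s2 = min(max(s + ACTIONS[a], 0), N - 1)
    reward = 1.0 if s2 == N - 1 else 0.0
    return s2, reward, s2 == N - 1

# Plain tabular Q-learning on the training environment.
Q = np.zeros((N, 2))
rng = np.random.default_rng(0)
for _ in range(2000):
    s = 0
    for _ in range(50):
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(Q[s].argmax())
        s2, r, done = step(s, a)
        Q[s, a] += 0.5 * (r + 0.9 * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2
        if done:
            break

# Greedy action in each non-terminal state: all 1s, i.e. "always go right".
print(Q[:N - 1].argmax(axis=1))

# On the training distribution, the training reward ("reach state N-1") and the
# behavioral objective ("move right") explain this policy equally well. If the
# rewarded state were moved to the left end at deployment, the policy would keep
# going right, so the objective recovered by even perfect IRL on its behavior
# looks like "go right", not the training reward function.
```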
Fair point, we should probably weaken this claim somewhat.