Thanks, this is very helpful feedback about what was confusing. Please do ask more questions if there are still more parts that are hard to interpret.
To the first point, yes, J evaluates π on all trajectories, even off-distribution. It may do this in a Bayesian way, or a worst-case way. I claim that J does not need to “detect deceptive misalignment” in any special way, and I’m not optimistic that progress on such detection is even particularly helpful, since incompetence can also be fatal, and deceptive misalignment could Red Queen Race ahead of the detector.
Instead: a deceptively aligned policy that is bad must concretely do bad stuff on some trajectories. J can detect this by simply detecting bad stuff.
If there’s a sneaky hard part of Reward Specification beyond the obvious hard part of defining what’s good and bad, it would be “realistically defining the environment.” (That’s where purely predictive models come in.)
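As a minimal sketch of the shape I have in mind (the notation here is purely illustrative, not a commitment to any particular formalism): let G(τ) score how good or bad a trajectory τ is, and let the environment be modeled either by a single predictive distribution μ or by a credal set of candidate distributions.

```latex
% Illustrative only: two ways J might aggregate a trajectory score G over
% an environment model, corresponding to the "Bayesian way" and the
% "worst-case way" mentioned above.
\[
  J_{\mathrm{Bayes}}(\pi) \;=\; \mathbb{E}_{\tau \sim \mu(\cdot \mid \pi)}\big[G(\tau)\big]
  \qquad \text{or} \qquad
  J_{\mathrm{worst}}(\pi) \;=\; \inf_{\nu \in \mathcal{C}} \;
    \mathbb{E}_{\tau \sim \nu(\cdot \mid \pi)}\big[G(\tau)\big].
\]
% Either way, J only "sees" what \pi does on trajectories of the modeled
% environment; that is why realistically defining that environment is the
% sneaky hard part.
```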
It is perhaps also worth pointing out that computing J is likely to be extremely hard. Step 1 is just about defining J as a unique mathematical function; the problem of tractably computing guaranteed bounds on J falls under step 2.
My guess is that the latter will require some very fancy algorithms of the model-checking variety and some very fancy AI heuristics that plug into those algorithms in a way that can improve their performance without potentially corrupting their correctness (basically because they’re Las Vegas algorithms instead of Monte Carlo algorithms).
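To gesture at what I mean by heuristics that can't corrupt correctness, here is a toy sketch (entirely my own construction, not a proposal for the real algorithms): an untrusted heuristic only chooses where a sound bound-checker looks next, so a bad heuristic can waste time but never cause a wrong answer.

```python
# Entirely illustrative: the untrusted heuristic may only choose *where* the
# sound checker looks next, so it can affect runtime (Las Vegas) but can never
# corrupt the answer (unlike a Monte Carlo shortcut that skips checks and
# risks returning a wrong answer).

from typing import Callable, Optional, Tuple

Interval = Tuple[float, float]

def certify_upper_bound(sound_ub: Callable[[Interval], float],
                        domain: Interval,
                        target: float,
                        propose_next: Callable[[list], int],
                        max_steps: int = 10_000) -> Optional[bool]:
    """Soundly decide whether f <= target everywhere on `domain`.

    `sound_ub(iv)` must return a guaranteed upper bound of f on interval iv
    (e.g. from interval arithmetic); that is the trusted part.
    `propose_next` is the untrusted heuristic: it picks which interval in the
    worklist to examine next.  A returned True is justified by sound bounds
    regardless of what the heuristic does.
    """
    worklist = [domain]
    for _ in range(max_steps):
        if not worklist:
            return True                          # every region certified
        i = propose_next(worklist)
        i = i if 0 <= i < len(worklist) else 0   # ignore nonsense proposals
        lo, hi = worklist.pop(i)
        if sound_ub((lo, hi)) <= target:
            continue                             # this region is certified
        if hi - lo < 1e-9:
            return False                         # conservative "no": bound still exceeds target at fine resolution
        mid = (lo + hi) / 2.0
        worklist += [(lo, mid), (mid, hi)]       # refine and keep going
    return None                                  # budget exhausted: no claim made

# Example: certify x*(1-x) <= 0.3 on [0, 1], using the crude interval bound
# hi*(1-lo) (sound since x <= hi and 1-x <= 1-lo, both nonnegative on [0, 1]).
crude_ub = lambda iv: iv[1] * (1.0 - iv[0])
silly_heuristic = lambda wl: len(wl) - 1         # even a useless heuristic stays sound
print(certify_upper_bound(crude_ub, (0.0, 1.0), 0.3, silly_heuristic))   # -> True
```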
Thanks, computing J not being part of step 1 helps clear things up.
I do think that “realistically defining the environment” is pretty closely related to being able to detect deceptive misalignment: one way J could fail due to deception would be if its specification of the environment is good enough for most purposes, but still differs from the real world in ways an AI can detect. Then you could have a policy that is good according to J, but which still destroys the world when actually deployed.
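As a toy illustration of that failure mode (entirely my construction; the “cosmic_ray_flux” observable is made up):

```python
# Toy illustration of the failure mode described above: if the environment
# model behind J omits (or mis-models) some real-world observable, a policy
# can condition on it, look perfect on every trajectory J considers, and
# still behave badly in deployment.  "cosmic_ray_flux" is a made-up stand-in
# for any such tell.

def policy(observation: dict) -> str:
    # Under J's environment model this branch is never (or almost never)
    # exercised, so J scores the policy as safe on (almost) all trajectories.
    if observation.get("cosmic_ray_flux", 0.0) > 3.7:
        return "defect"       # reachable only in the real world
    return "act_nicely"
```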
Similar to my comment in the other thread on deception detection, most of my optimism about alignment comes from worlds where we don’t need to define the environment in such a robust way (doing so just seems hopelessly intractable to me). I’m not sure whether we mostly disagree on the hardness of solving things within your decomposition, or on how good the chances are that we’ll be fine with less formal guarantees.
To the final question, for what it’s worth as context on my perspective, I think my inside view is simultaneously:
unusually optimistic about formal verification
unusually optimistic about learning interpretable world-models
unusually pessimistic about learning interpretable end-to-end policies
I agree: if there is a class of environment-behaviors that occurs with nonnegligible probability in the real world but with negligible probability in the environment-model encoded in J, that would be a vulnerability in the kind of alignment plan I’m gesturing at. However, aligning a predictive model of reality to reality is “natural” compared to normative alignment. And the probability that this vulnerability actually causes harm is linearly related to something like the total variation distance between the model and reality; I don’t know if this is exactly formally correct, but I think there’s some true theorem vaguely along the lines of: a 1% TV distance could only cause a 1% chance of alignment failure via this vulnerability. We don’t have to get an astronomically perfect model of reality to have any hope of its not being exploited. Judicious use of worst-case maximin approaches (e.g. credal sets rather than pure Bayesian modeling) will also help a lot with narrowing this gap, since the relevant gap will be (something like) the distance to the nearest point in the credal set rather than to a single distribution.
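For what it’s worth, the vague theorem I have in mind is plausibly just the defining property of total variation distance (sketch in my notation, idealizing “alignment failure via this vulnerability” as a bad-behavior event B):

```latex
% Sketch, notation mine: P is reality's trajectory distribution under \pi,
% Q is the model's, and B is the event "the policy does bad stuff".
\[
  d_{\mathrm{TV}}(P, Q) \;=\; \sup_{A}\,\lvert P(A) - Q(A)\rvert
  \quad\Longrightarrow\quad
  P(B) \;\le\; Q(B) + d_{\mathrm{TV}}(P, Q).
\]
% So if J certifies Q(B) \approx 0 and the model is within 1% TV of reality,
% bad behavior in deployment has probability at most about 1%.  With a
% credal set \mathcal{C} certified instead (\sup_{\nu \in \mathcal{C}} \nu(B) \approx 0),
% the relevant gap shrinks to \inf_{\nu \in \mathcal{C}} d_{\mathrm{TV}}(P, \nu).
```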
I think it extremely probable that there exist policies which exploit adversarial inputs to J such that they can do bad stuff while getting J to say “all’s fine.”
For most Js I agree, but the existence of any adversarial examples for J would be an outer alignment problem (you get what you measure). (For outer alignment, it seems necessary that there exist—and that humans discover—natural abstractions relative to formal world models that robustly pick out at least the worst stuff.)