it doesn’t depend on goodness of HCH, and instead relies on some claims about offense-defense between teams of weak agents and strong agents
Can you point me to the original claims? While trying to find it myself, I came across https://aligned.substack.com/p/alignment-optimism which seems to be the most up-to-date explanation of why Jan thinks his approach will work (and which also contains his views on the obfuscated arguments problem and how RRM relates to IDA, so should be a good resource for me to read more carefully). Are you perhaps referring to the section “Evaluation is easier than generation”?
Do you have any major disagreements with what’s in Jan’s post? (It doesn’t look like you publicly commented on either Jan’s substack or his AIAF link post.)
I don’t think I disagree with many of the claims in Jan’s post; generally I think his high-level points are correct.
He lists a lot of things as “reasons for optimism” that I wouldn’t consider positive updates (e.g. stuff working that I would strongly expect to work) and doesn’t list the analogous reasons for pessimism (e.g. stuff that hasn’t worked well yet). Similarly, I’m not sure conviction in language models is a good thing, but it may depend on your priors.
One potential substantive disagreement with Jan’s position is that I’m somewhat more scared of AI systems evaluating the consequences of each other’s actions, and therefore more interested in trying to evaluate proposed actions on paper (rather than needing to run them to see what happens). That is, I’m more interested in “process-based” supervision and decoupled evaluation, whereas my understanding is that Jan sees a larger role for systems doing things like carrying out experiments, with evaluation of the results in the same way that we’d evaluate an employee’s output.
(This is related to the difference between IDA and RRM that I mentioned above. I’m actually not sure about Jan’s all-things-considered position, and I think this piece is a bit agnostic on this question. I’ll return to it below.)
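To make the distinction concrete, here is a toy sketch of the two supervision styles I have in mind (purely illustrative; the overseer and environment objects and their methods are made-up stand-ins, not code from any real system):

```python
# Purely illustrative sketch of the two supervision styles, not real code from
# any lab. `overseer` and `environment` are stand-ins for whatever machinery
# actually does the evaluating and executing.

def process_based_reward(proposed_action: str, overseer) -> float:
    """Score a proposed action "on paper", without ever executing it.

    The overseer only sees the plan/diff/argument the AI produced, so the
    reward can't be gamed by tampering with downstream sensors or logs.
    """
    return overseer.evaluate_plan(proposed_action)


def outcomes_based_reward(proposed_action: str, environment, overseer) -> float:
    """Execute the action and score what actually happened.

    This captures consequences the overseer couldn't predict in advance, but
    the measurement now depends on sensors the AI might learn to corrupt.
    """
    outcome = environment.execute(proposed_action)
    return overseer.evaluate_outcome(outcome)
```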
The basic tension here is that if you evaluate proposed actions you easily lose competitiveness (since AI systems will learn things overseers don’t know about the consequences of different possible actions) whereas if you evaluate outcomes then you are more likely to have an abrupt takeover where AI systems grab control of sensors / the reward channel / their own computers (since that will lead to the highest reward). A subtle related point is that if you have a big competitiveness gap from process-based feedback, then you may also be running an elevated risk from deceptive alignment (since it indicates that your model understands things about the world that you don’t).
In practice I don’t think either of these (competitiveness or takeover risk) is a huge issue right now. I think process-based feedback is pretty much competitive in most domains, but the gap could grow quickly as AI systems improve (depending on how well our techniques work). On the other side, I think that takeover risks will be small in the near future, and it is very plausible that you can get huge amounts of research out of AI systems before takeover is a significant risk. That said, I do think that risk will eventually become large, and so we will need to turn to something else: new breakthroughs, process-based feedback, or fortunate facts about generalization.
As I mentioned, I’m actually not sure what Jan’s current take on this is, or exactly what view he is expressing in this piece. He says:
Another important open question is how much easier evaluation is if you can’t rely on feedback signals from the real world. For example, is evaluation of a piece of code easier than writing it, even if you’re not allowed to run it? If we’re worried that our AI systems are writing code that might contain trojans and sandbox-breaking code, then we can’t run it to “see what happens” before we’ve reviewed it carefully.
I’m not sure where he comes down on whether we should use feedback signals from the real world, and if so what kinds of precautions we should take to avoid takeover and how long we should expect them to hold up. I think both halves of this are just important open questions: will we need real-world feedback to evaluate AI outcomes? In what cases will we be able to do so safely? If Jan is also just very unsure about both of these questions then we may be on the same page.
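To picture the constraint in that quote, the workflow would have to look something like the toy gate below, where review happens strictly before execution (purely illustrative; the reviewer and sandbox objects are made-up stand-ins, not a real security mechanism):

```python
# Toy "review before run" gate for untrusted model-written code. Purely
# illustrative; `reviewer` and `sandbox` are made-up stand-ins.

class ReviewRejected(Exception):
    pass


def review_then_run(untrusted_code: str, reviewer, sandbox):
    """Only execute model-written code after an on-paper review approves it."""
    verdict = reviewer.review(untrusted_code)  # static review, no execution
    if not verdict.approved:
        # We never get to "see what happens" for code that fails review.
        raise ReviewRejected(verdict.reason)
    return sandbox.run(untrusted_code)  # real-world feedback only after approval
```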
I generally hope that OpenAI can have strong evaluations of takeover risk (including: understanding their AI’s capabilities, whether their AI may try to take over, and their own security against takeover attempts). If so, then questions about the safety of outcomes-based feedback can probably be settled empirically and the community can take an “all of the above” approach. In this case “all of the above” is particularly easy, since everything is sitting on the same spectrum. A realistic system is likely to involve some messy combination of outcomes-based and process-based supervision; we’ll just be adjusting dials in response to evidence about what works and what is risky.
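As a cartoon of the “adjusting dials” picture (my own illustration, not a concrete proposal): a single training signal that blends the two kinds of feedback, with the blend weight as the dial.

```python
# Cartoon of the "adjusting dials" picture: one scalar dial controlling how much
# weight outcomes-based feedback gets relative to process-based feedback.

def combined_reward(process_score: float, outcome_score: float,
                    outcome_weight: float = 0.2) -> float:
    """Blend process-based and outcomes-based supervision signals.

    In the picture above, outcome_weight would be turned down as evidence of
    tampering/takeover risk grows, and up where outcomes are safe to measure.
    """
    assert 0.0 <= outcome_weight <= 1.0
    return (1.0 - outcome_weight) * process_score + outcome_weight * outcome_score
```

In reality the dial wouldn't be a single scalar, but the trade-off being adjusted is the same one described above.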
Thanks for this helpful explanation.