It sure doesn’t seem to generalize in GPT-4o’s case. But what’s the hypothesis for Sonnet 3.5 refusing in 85% of cases? And CoT improving the score, plus o1 doing better in the browser environment, suggests the problem is the models not understanding consequences, not the models not trying to be good. What’s the rate of capability generalization to the agent environment? Are we going to conclude that Sonnet merely demonstrates reasoning, rather than doing it for real, if it only solves 85% of the tasks it correctly talks about?
Also, what’s the rate of generalization of unprompted avoidance of problematic behaviour? It’s much less of a problem if your AI does what you tell it to do: you can just not give it to users, tell it to invent nanotechnology, and win.
I think it’s fair to say that some smarter models do better at this; still, it’s worrisome that there is a gap. Attacks also continue to transfer.
Finishing this up had been on my to-do list for a while. I’ve now written a full-length post on it:
https://www.lesswrong.com/posts/ZoFxTqWRBkyanonyb/current-safety-training-techniques-do-not-fully-transfer-to