Ah right. I listened to that podcast but didn’t catch the significance of this proposal for improving language model agent alignment. Roger Dearnaley did heavily credit that paper in his post.