Huge if true! Faithful Chain of Thought may be a key factor in whether the promise of LLMs as ideal for alignment pays off, or not.
I am increasingly concerned that OpenAI isn’t showing us o1’s CoT because it uses lots of jargon that is heading toward a private language. I hope it’s merely that they didn’t want to show its unaligned “thoughts”, and wanted to prevent competitors from training on its useful chains of thought.
IMO, most of the reason they are not releasing o1’s CoT is precisely PR/competitive concerns, i.e. this reason in a nutshell:
I hope it’s merely that it didn’t want to show its unaligned “thoughts”, and to prevent competitors from training on its useful chains of thought.