There’s a regularization problem to solve for 3.9 and 4, and it’s not obvious to me that glee will be enough to solve it (3.9 = “unintelligible CoT”).
I’m not sure how o1 works in detail, but for example, backtracking (which o1 seems to use) makes heavy use of the pretrained distribution to decide on the best next moves. So, at the very least, it’s not easy to do away with the model’s native understanding of language. While it’s true that there is some amount of data that will enable large divergences from the pretrained distribution—and I could imagine mathematical proof generation eventually reaching this point, for example—more ambitious goals inherently come with less data, and it’s not obvious to me that there will be enough data in alignment-critical applications to cause such a large divergence.
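To make the “regularization problem” a bit more concrete: one standard way RL fine-tuning is kept from drifting far off the pretrained distribution is a KL penalty toward a frozen reference model. Below is a minimal sketch of that kind of objective (PyTorch, hypothetical shapes; I’m not claiming this is what o1 actually does, only illustrating the knob that would keep the CoT close to natural language):

```python
import torch
import torch.nn.functional as F

def kl_regularized_loss(policy_logits, ref_logits, advantages, actions, beta=0.1):
    """Toy RL objective with a KL penalty toward a frozen pretrained model.

    policy_logits / ref_logits: (batch, seq_len, vocab) token logits from the
        trained policy and the frozen pretrained reference model.
    advantages: (batch,) scalar reward signal per sampled chain of thought.
    actions: (batch, seq_len) sampled token ids of the chain of thought.
    beta: strength of the pull back toward the pretrained distribution --
        the "regularization" knob that keeps the CoT in natural language.
    """
    logp = F.log_softmax(policy_logits, dim=-1)
    ref_logp = F.log_softmax(ref_logits, dim=-1)

    # Log-prob of the sampled CoT tokens under the current policy.
    token_logp = logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1)  # (batch, seq_len)

    # REINFORCE-style term: push up sequences with positive advantage.
    pg_loss = -(advantages.unsqueeze(-1) * token_logp).mean()

    # KL(policy || pretrained): large only when the CoT drifts far from
    # the pretrained language distribution.
    kl = (logp.exp() * (logp - ref_logp)).sum(-1).mean()

    return pg_loss + beta * kl
```

The open question in the thread is essentially whether a term like `beta * kl` (or whatever regularizer is actually used) stays strong enough, relative to the reward signal, to keep the CoT intelligible as capabilities scale.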
There’s an alternative version of language invention where the model invents a better language for (e.g.) maths, then uses that for more ambitious projects—but that language is probably quite intelligible!
When I imagine models inventing a language, what comes to mind is something like Shinichi Mochizuki’s Inter-universal Teichmüller theory, invented for his claimed proof of the abc conjecture. It is clearly something like mathematical English, and you could say it is “quite intelligible” compared to “neuralese”, but in the end it is not very intelligible.
Mathematical reasoning might be specifically conducive to language invention because our ability to automatically verify reasoning means that we can potentially get lots of training data. The reason I expect the invented language to be “intelligible” is that it is coupled (albeit with some slack) to automatic verification.
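As a toy illustration of why verifiable domains can generate so much training data: sample candidate reasoning traces and keep only the ones a checker accepts. The `model.sample` and `verifier.check` interfaces below are hypothetical stand-ins, not any real API—this is just a sketch of the loop.

```python
def collect_verified_traces(model, verifier, problems, samples_per_problem=64):
    """Generate training data by filtering sampled reasoning through a verifier."""
    dataset = []
    for problem in problems:
        for _ in range(samples_per_problem):
            trace = model.sample(problem)          # candidate reasoning / proof
            if verifier.check(problem, trace):     # automatic verification
                dataset.append((problem, trace))   # keep only verified traces
    return dataset
```

Whatever “language” the model invents inside `trace` stays intelligible only to the extent that the verifier’s input format constrains it—the coupling (with slack) mentioned above.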