I doubt that anyone even remembers this, but I feel compelled to say it: there was some conversation about AI maybe 10 years ago, possibly on LessWrong, where I offered the view that abstract math might take AI a particularly long time to master compared to other things.
I don’t think I ever had a particularly good reason for that belief other than a vague sense of “math is hard for humans so maybe it’s hard for machines too”. But I’m formally considering that prediction falsified now.
Even a year ago, I would have bet extremely high odds that data analyst-type jobs would be replaced well before postdocs in math and theoretical physics. It’s wild that the reverse is plausible now.
Do you think there’s any other updates you should make as well?
Relative to 10 (or whatever) years ago? Sure, I’ve made quite a few of those already. By this point it’d be hard to remember my past beliefs well enough to make a list of differences.
Due to o3 specifically? I’m not sure; I have difficulty telling how significant things like ARC-AGI are in practice, but the general result of “improvements in programming and math continue” doesn’t seem like a huge surprise by itself. It’s certainly an update in favor of the current paradigm continuing to scale and pay back the funding put into it, though.
Math is just a language (a very simple one, in fact), so abstract math is right in the wheelhouse for something built for language. Large Language Models are called that for a reason, and abstract math doesn’t rely on the world itself, just the language of math. LLMs lack grounding, but abstract math doesn’t require it at all. It’s more surprising how badly LLMs did at math than that they’ve made progress. (Admittedly, if you actually mean ten years ago, that’s before LLMs were really a thing; the attention mechanism that distinguishes the transformer had only barely been invented then.)
I disagree with this: good mathematics definitely requires at least a little understanding of the world. If I were to guess why LLMs succeeded at math, I’d point to the fact that it’s an unusually verifiable task relative to the vast majority of tasks, and to the fact that you can get a lot of high-quality data, which also helps LLMs.
Only programming shares these traits to an exceptional degree; outside of mathematics and programming, I expect less transferability, though not effectively zero.
Math is definitely just a language. It is a combination of symbols and a grammar for how they go together. It’s what you come up with when you maximally abstract away the real world, and the part about not needing any grounding was specifically about abstract math, where there is no real world.
Verifiability is obviously important for training (since we can generate effectively infinite training data), but the reason math is so easily verifiable is that it doesn’t rely on the world. Also, note that programming languages are just that, languages (and quite simple ones), but abstract math is even less dependent on the real world than programming.
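To make the verifiability point concrete, here is a minimal sketch of how an exactly checkable task yields an effectively unlimited supply of labeled training signal. The problem generator and the `model_answer` stub are purely hypothetical stand-ins, not any particular model or training setup.

```python
# Minimal sketch: verifiable tasks let you generate problems, ground truth,
# and rewards programmatically, with no human labeling in the loop.
import random

def make_problem() -> tuple[str, int]:
    """Generate a fresh arithmetic problem and its exact answer."""
    a, b = random.randint(1, 10**6), random.randint(1, 10**6)
    return f"What is {a} + {b}?", a + b

def reward(proposed: str, truth: int) -> float:
    """Binary reward from an exact verifier."""
    try:
        return 1.0 if int(proposed.strip()) == truth else 0.0
    except ValueError:
        return 0.0

def model_answer(question: str) -> str:
    """Hypothetical stand-in for the model being trained."""
    return "42"

if __name__ == "__main__":
    question, truth = make_problem()
    print(question, "->", reward(model_answer(question), truth))
```

The point the sketch illustrates is that the reward comes from an exact check rather than from human judgment, which is what lets the training data be generated without limit.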
Yeah, I’m not sure of the exact date, but it was definitely before LLMs were a thing.