If Thurston is right here and mathematicians want to understand why some theorem is true (rather than just to know the truth values of various conjectures), and if we “feel the AGI” … then it seems future “mathematics” will consist of “mathematicians” asking a future ChatGPT to explain math to them: whether something is true, and why. There would be no research anymore.
The interesting question, I think, is whether less-than-fully-general systems, like reasoning LLMs, could outperform humans in mathematical research, or whether this would require a full AGI that is also smarter than mathematicians. If we had the latter, it would likely be an ASI that is better than humans at almost everything, not just mathematics.
He actually cites reflective equilibrium here: