I think the “factor of a thousand” here is mostly a consequence of mathematics itself being a very unusual field. If you can reach even a tiny bit further into some “concept-space” than others, for whatever reasons internal or external, then you can publish everything you find in that margin and it will all be new. If you can’t, then you pretty much have to work on the diminishing unpublished corners of already-reached concept-space, and most of it will look derivative.
I would certainly expect AI to blow rapidly past human mathematicians at some point due to surpassing human “reach”. Whether that would also enable breakthroughs in various other sciences that rely on mathematics remains to be seen. Advances in theoretical physics may well need new abstract mathematical insights. Technology and engineering probably do not.
Sure, having just a little bit more general optimization power lets you search slightly deeper into abstract structures, opening up tons of options. Among human professions, this may be especially apparent in mathematics. But that doesn’t make it any less scary?
Like, I could have said something similar about the best vs. average programmers/“hackers” instead; there’s a similarly huge range of variation there. Perhaps that would have been a better analogy, since the very best hackers have some more obviously scary capabilities (e.g. the ability to find security vulnerabilities).
It’s definitely scary. I think it is somewhat less scary for general capabilities than for mathematics (and a few closely related fields) in particular, though. Most of the scary things a UFAI can do will, unlike mathematics, involve feedback cycles with the real world. This includes programming (and hacking!), science research and development, stock market prediction or manipulation, and targeted persuasion.
I don’t think the first average-human-level AIs for these tasks will be immediately followed by superhuman ones. In the absence of a rapid self-improvement takeoff, I would expect a fairly steady progression from average human capability (though with weird strengths and weaknesses), through increasingly rare human capability, and eventually into the superhuman. While the ability to play chess is a terrible analogy for AGI, it did follow this sort of capability pattern: computer chess programs were beating increasingly skilled enthusiasts for decades before finally exceeding the top grandmasters.
In the absence of rapid AGI self-improvement, or of a sudden crystallization of hardware overhang into superhuman AGI capability through a software breakthrough, I don’t much fear AI capability improvement curves blowing through the human range in an eyeblink. It’s certainly a risk, but not a large chunk of my total credence for extinction. Most of my weight is on weakly superhuman AGI being able to improve itself or its successors into strongly superhuman AGI.