ASI turns out to take longer than you might think; it doesn’t arrive until 2037 or so. So far, this is the part of the scenario that’s gotten the most pushback.
Uhhh, yeah. 10 years between highly profitable and capitalized-upon AGI, with lots of hardware and compute put towards it, and geopolitical and economic reasons for racing...
I can’t fathom it. I don’t see what barrier at near-human-intelligence is holding back further advancement.
I’m quite confident that if we had the ability to scale up arbitrary portions of a human brain (e.g. the math area and its most closely associated parietal and prefrontal cortex areas), we’d create a smarter human than had ever before existed basically overnight. Why wouldn’t this be the case for a human-equivalent AGI system? Bandwidth bottlenecks? Nearly no returns to further scaling for some arbitrary reason?
Seems like you should prioritize making a post about how this could be a non-trivial possibility, because I just feel confused at the concept.
I largely agree that ASI will follow AGI faster than the scenario suggests, but with a couple of caveats.
The road from AGI to superintelligence will very likely be fairly continuous. You could slap the term “superintelligence” almost wherever you want after it passes human level.
I do see some reasons that the road will go a little slower than we might think. Scaling laws are logarithmic, and making more and better chips requires physical technology that the AGI can help with but can’t do on its own until it gets better at robotics, possibly including new hardware (although humanoid robotics will be close to adequate for most things by then, with new control networks rapidly trained by the AGI).
If the architecture is similar to current LLMs, it’s enough like human thought that I expect the progression to remain logarithmic; you’re still using the same clumsy basic algorithm of using your knowledge to come up with ideas, then going through long chains of thought and ultimately experiments to test the validity of different ideas.
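To make the “logarithmic” point concrete, here is a toy sketch (the function and numbers are my own illustrative assumptions, not anything from the scenario): if a capability index grows roughly with the log of compute, every doubling of compute buys the same fixed increment, so throwing more hardware at an AGI yields steadily diminishing returns rather than an overnight leap.

```python
import math

def capability(compute, base=1.0, gain_per_doubling=1.0):
    """Hypothetical capability index that grows with the log of compute."""
    return base + gain_per_doubling * math.log2(compute)

# Each doubling of compute adds the same fixed increment,
# so 1024x the compute is nowhere near 1024x the capability.
for c in [1, 2, 4, 8, 1024]:
    print(f"compute x{c:>4}: capability {capability(c):.1f}")
```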
It’s completely dependent on what we mean by superintelligence, but creating new technologies in a day will take maybe five years after the first clearly human-level, genuinely general AGI on this path, in my rough estimate.
Of course that’s scaled by how hard people are actually trying for it.
Oh, it very possibly is the wrongest part of the piece! I put it in the original workshop draft as I was running out of time and wanted to provoke debate.
A brief gesture at a sketch of the intuition: imagine a different, crueler world with orders of magnitude more nation-states but, as in ours, only a few nuclear powers at the start, all on a 1950s-level tech base. If the few nuclear powers want to keep control, they’ll have to divert huge chunks of their breeder reactors’ output to pre-emptively nuking any site in the many, many non-nuclear-club states that could be hosting an arms program, to prevent breakouts; with so much output diverted, any of the nuclear powers would then have to wait a fairly long time to assemble an arms stockpile sufficient to launch a Project Orion into space.
Interesting! You should definitely think more about this and write it up sometime; either you’ll change your mind about timelines to superintelligence, or you’ll have found an interesting novel argument that may change other people’s minds (such as mine).
I think I’m also learning that people are way more interested in this detail than I expected!
I debated changing it to “203X” when posting to avoid this becoming the focus of the discussion but figured, “eh, keep it as I actually wrote it in the workshop” for good epistemic hygiene.