ASI turns out to take longer than you might think; it doesn’t arrive until 2037 or so. So far, this is the part of the scenario that’s gotten the most pushback.
Uhhh, yeah. 10 years between highly profitable and capitalized-upon AGI, with lots of hardware and compute put towards it, and geopolitical and economic reasons for racing...
I can’t fathom it. I don’t see what barrier at near-human intelligence is holding back further advancement.
I’m quite confident that if we had the ability to scale up arbitrary portions of a human brain (e.g. the math area and its most closely associated parietal and prefrontal cortex areas), we’d create a smarter human than had ever before existed basically overnight. Why wouldn’t this be the case for a human-equivalent AGI system? Bandwidth bottlenecks? Nearly no returns to further scaling for some arbitrary reason?
Seems like you should prioritize making a post about how this could be a non-trivial possibility, because I just feel confused by the concept.
Oh, it very possibly is the wrongest part of the piece! I put it in the original workshop draft as I was running out of time and wanted to provoke debate.
A brief gesture at a sketch of the intuition: imagine a different, crueler world with orders of magnitude more nation-states but, as in ours, only a few nuclear powers at the start, working from a 1950s-level tech base. If the few nuclear powers want to keep control, they have to divert huge chunks of their breeder reactors’ output to pre-emptively nuking any site in the many, many non-nuclear-club states that could house an arms program, in order to prevent breakouts. Only then would any of the nuclear powers face the fairly long wait to assemble an arms stockpile sufficient to launch a Project Orion into space.
Interesting! You should definitely think more about this and write it up sometime; either you’ll change your mind about timelines till superintelligence, or you’ll have found an interesting novel argument that may change other people’s minds (such as mine).