Great post; it's good to hear these ideas. Since studying AI risk more closely this year, I've come to believe that a slow takeoff is more likely, and that there needs to be more discussion of how to handle such a situation.
A few points:

You seem to treat hardware progress as fixed and steady, but the opposite could be the case: it seems much easier to coordinate on hardware manufacturing than on software. Chip manufacturing depends on a massive, very fragile supply chain spanning multiple countries, and it wouldn't take many players to slow things down for everyone.
You can certainly make a case that short timelines are safer, since there is less AI integration in society: no robots in every home, in the military, etc.
Even a slow takeoff is still very fast for society to adapt to, and there is no guarantee it can adapt, especially the higher the capability level at the end of the takeoff. If that level is sufficiently high, I don't think society can adapt at all, no matter what. WBE, neural lace/Neuralink, and similar technologies are needed, and quickly, even in a “slow” scenario.
I think time during takeoff is roughly 10x more valuable for alignment research than time beforehand. It is becoming clear that some of the earlier alignment assumptions were incorrect, and I expect that to continue as we learn more. Before GPT-3.5, the alignment field seemed essentially stuck, making little to no progress.