Reasons for thinking that later TAI would be better:
- General human progress, e.g. increased wealth; wealthier people take fewer risks (and aged populations also take fewer risks)
- Specific human progress, e.g. on technical alignment (though the bottleneck may be implementation, and much current work is specific to a particular paradigm) and on human intelligence augmentation
- The current time is one of unusually high geopolitical tension; in a decade the PRC is going to be the clear hegemon
Reasons for thinking that sooner TAI would be better:
- The AI safety community has unusually strong influence at the moment and has decided to deploy most of that influence now (it has more influence in the Anglosphere, and lab leaders have heard of AI safety ideas and arguments); it might later lose that influence and mindshare
- The current paradigm is likely unusually safe (LLMs start with world-knowledge, are non-agentic at first, and have visible thoughts); later paradigms are plausibly much worse (65%)
- The PRC being the hegemon would be bad because of risks from authoritarianism
- Hardware overhangs are less likely, leading to more continuous development
Another consideration is takeoff speed: if TAI happens earlier, further progress is more bottlenecked by compute, which slows the takeoff. A slower takeoff gives humans more time to inform their decisions (though it might also make things harder in other ways).