(...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties.
- Saharon Shelah
As a true-born Dutchman I endorse Crocker’s rules.
For most of my writing see my short-forms (new shortform, old shortform)
Twitter: @FellowHominid
Personal website: https://sites.google.com/view/afdago/home
My mainline prediction scenario for the next decades.
My mainline prediction*:
LLMs will not scale to AGI. They will not spawn evil gremlins or mesa-optimizers. But scaling laws will continue to hold, and future LLMs will be very impressive and make a sizable impact on the real economy and on science over the next decade.
There is a single innovation left to make AGI-in-the-Alex-sense work, i.e. coherent, long-term planning agents (LTPAs) that are effective and efficient in data-sparse domains over long horizons.
That innovation will be found within the next 10–15 years.
It will be clear to the general public that these agents are dangerous.
Governments will act quickly and (relatively) decisively to bring these agents under state control. National security concerns will dominate.
Power will reside mostly with governments' AI safety institutes and national security agencies. Insofar as divisions of tech companies are able to create LTPAs, they will be effectively nationalized.
International treaties will be made to constrain AI, outlawing the development of LTPAs by private companies. Great-power competition will mean the US and China continue developing LTPAs, possibly largely boxed. Treaties will try to constrain this development, with only partial success (similar to nuclear treaties).
LLMs will continue to exist and be used by the general public.
Conditional on AI ruin, the closest analogy is probably something like the Cortés–Pizarro–Afonso takeovers. Unaligned AI will rely on human infrastructure and human allies for the earlier parts of its takeover, but its inherent advantages in tech, coherence, decision-making, and (artificial) plagues will be the deciding factors.
The world may be mildly multipolar.
This will involve conflict between AIs.
AIs may very well be able to cooperate in ways humans can't.
The arrival of AGI will immediately inaugurate a scientific revolution. Sci-fi-sounding progress like advanced robotics, quantum magic, nanotech, life extension, laser weapons, large-scale space engineering, and cures for many/most remaining diseases will become possible within two decades of AGI, possibly much faster.
Military power will shift to automated manufacturing of drones and weaponized artificial plagues. Drones, mostly flying, will dominate the battlefield. Mass production of drones and their rapid, effective deployment in swarms will be key to victory.
Two points on which I differ with most commentators: (i) I believe AGI is a real (mostly discrete) thing, not a vibe or a general accretion of improved tools. I believe it is inherently agentic. I don't think spontaneous emergence of agents is impossible, but I think it is more plausible that agents will be built rather than grown.
(ii) I believe that in general the EA/AI safety community is greatly overrating the importance of individual tech companies vis-à-vis broader trends and the power of governments. I strongly agree with Stefan Schubert's take here on the latent power of government: https://stefanschubert.substack.com/p/crises-reveal-centralisation
Consequently, the EA/AI safety community often focuses myopically on boardroom politics that are relatively inconsequential in the grand scheme of things.
*Where by "mainline prediction" I mean the scenario that is the mode of my expectations, i.e. the single likeliest scenario. However, since it contains a large number of details, each of which could go differently, the probability of this specific scenario is still low.