So in 2028, AGI might be well on the way to ASI (unclear how long this takes). Certainly I would expect it to be able to do any task on a computer faster and better than any human short of a Nobel-prize-level expert in that specific task. I expect very fast and agile control of humanoid robots to be possible, and that humanoid robots will be capable of doing anything a human of average strength and dexterity can do. I don’t know how expensive and rare such robots will be; probably both quite expensive and quite rare at that time. They will probably be on a trend towards getting rapidly cheaper, though, as their obvious utility for a broad range of purposes will make it attractive for investors to fund scaling up their manufacture.
There are already narrow AIs capable of superhuman drone control, including faster-than-human reaction times. I expect this to continue to advance, becoming more robust, faster, and cheaper. I believe this tech will also expand to allow such drones to aim and fire weapons systems at a superhuman level. This has already been demonstrated to be possible to some extent, but I’ve not yet seen demonstrations of superhuman accuracy and reaction time.
I expect that AI will be capable of controlling lab robots, or of instructing unskilled humans of average dexterity in manipulating lab equipment, to accomplish biology lab tasks. I expect that some combination of AIs (e.g. a general AI trained in tool use and equipped with narrow biology-specific AIs like AlphaFold) will be available in 2028 that, when assembled into a system, will be capable of coming up with novel lab protocols to accomplish a wide range of goals. I expect that such a BioAI system will be capable of designing a protocol to use commonly available, non-governmentally-restricted materials and common lab equipment to assemble a bioweapon. I believe that by then the upper bound on the danger of potential bioweapons will be greatly expanded beyond what is currently known, in part due to advances in biological design tools and in part due to public advances in the science of genetics. Therefore, I expect it to be possible for an AI to design and guide the creation of a bioweapon capable of wiping out 99%+ of humanity. I’m not saying I expect this to happen, just that I expect the technology available in 2028 to make this possible.
I expect that AI agency and scaffolding for general models will continue to advance, and that agents will be able to accomplish many-step tasks reliably. I believe that AI agents of 2028 will be able to act without supervision to successfully pursue long-range, high-level goals like: ‘make money however you can, launder it through extensive crypto trades, and then deposit it in the specified bitcoin wallet’.
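To make the scaffolding point concrete, here is a minimal sketch of the plan-act-observe loop that such agent systems are built around. Everything in it (the function names, the tool registry, the step budget) is a hypothetical placeholder rather than any real framework’s API:

```python
# Minimal sketch of an agent scaffold: a plan/act/observe loop around a
# general model. All names here (query_model, TOOLS) are hypothetical
# placeholders, not a real product's API.

def query_model(prompt: str) -> str:
    """Stand-in for a call to a general-purpose model; returns the next action."""
    raise NotImplementedError("wire this up to an actual model API")

def web_search(query: str) -> str:
    """Stand-in for one tool the agent can invoke."""
    raise NotImplementedError

TOOLS = {"web_search": web_search}

def run_agent(goal: str, max_steps: int = 50) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # Ask the model for the next step, conditioned on everything so far.
        action = query_model("\n".join(history) + "\nNext action?")
        if action.startswith("FINISH:"):
            return action.removeprefix("FINISH:").strip()
        tool_name, _, arg = action.partition(" ")
        tool = TOOLS.get(tool_name, lambda a: f"unknown tool: {tool_name}")
        # Append the observation so the next iteration can build on it.
        history.append(f"ACTION: {action}\nRESULT: {tool(arg)}")
    return "step budget exhausted"
```

The prediction is not about this loop itself, which is already commonplace, but about reliability: today such loops compound errors over many steps, and I expect that failure mode to largely disappear by 2028.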
I expect that open-source models will continue to advance, and also that training and fine-tuning will continue to get cheaper and easier. I expect open-source models will still not have ‘sticky value alignment’, meaning that whoever controls an open-source model will be able to shape its behavior however they like. I don’t expect this to result in perfect intent-alignment, but I do expect ‘pretty good intent alignment’, such that the resulting AI agents will be able to be usefully deployed in low-supervision scenarios to enact many-step tasks in pursuit of high-level goals. I expect a typical home computer will not be capable of training an open-source model from scratch in a reasonable amount of time. I do expect that a typical home computer will be capable of fine-tuning a pre-trained open-source model that can be used as part of an agent system.
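As one illustration of why home-computer fine-tuning seems plausible: parameter-efficient methods like LoRA already let a single consumer GPU adapt a small pre-trained model by training only low-rank adapter matrices. Here is a sketch using the Hugging Face transformers and peft libraries; the base model, data, and hyperparameters are illustrative toys, not a recommended setup:

```python
# Sketch: LoRA fine-tuning of a small open model on consumer hardware,
# via Hugging Face transformers + peft. Model, data, and hyperparameters
# are illustrative placeholders.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # stand-in for whichever open model you actually use
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all the weights,
# cutting memory requirements enough to fit on one consumer GPU.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all params

texts = ["example training document one", "example training document two"]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Swapping the toy data for a task-specific corpus is all it takes to reshape the model’s behavior, which is exactly why I don’t expect open-source models to have ‘sticky value alignment’.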
I expect that the leading AI labs will have much more powerful AIs and agents than the open-source ones, and lots more of them (due to training continuing to be much more expensive than inference). I expect that inference for medium-capability models will continue to get faster and cheaper. I expect the leading labs to mostly manage to maintain pretty good control over their AIs and the AI actions taken through APIs. I expect that the more likely source of harms from uncontrolled AIs will be the relatively less-powerful open-source AIs getting out of hand, or being used in deliberately harmful ways.
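The training/inference asymmetry behind that parenthetical is easy to see with the standard back-of-envelope approximations: roughly 6·N·D FLOPs to train an N-parameter model on D tokens, and roughly 2·N FLOPs per generated token at inference. The numbers below are illustrative, not measurements of any actual model:

```python
# Back-of-envelope: why training stays far more expensive than inference.
# Uses the standard ~6*N*D FLOPs approximation for training and ~2*N FLOPs
# per token for inference. N and D are illustrative, not a real model.
N = 70e9   # parameters
D = 2e12   # training tokens

train_flops = 6 * N * D        # ~8.4e23 FLOPs, paid once
infer_flops_per_token = 2 * N  # ~1.4e11 FLOPs, paid per token served

# Tokens of inference you could serve with one training run's compute:
print(train_flops / infer_flops_per_token)  # 6e12 tokens, i.e. 3 * D
```

So a lab that can afford one frontier training run can afford an enormous amount of inference on the resulting model, which is why I expect the leading labs to run lots of agents rather than just a few.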
I think there’s a possibility that large AI labs will shift to not allowing even API access to their best models, because their best models will be too expensive to run, and also too helpful to competitors seeking to improve their own algorithms and models. Allowing your AI to help improve competitors’ AIs will, if my RSI predictions are accurate, be too risky to justify the profit and reputation gains. In such a future, the leading labs will each have internal-only models that they use to pursue RSI and generate better future generations of AI.
I expect the best AGIs of 2028 to be good at both scientific research and engineering. I expect that they’ll be able to build themselves toolkits of narrow AIs which make superhuman predictions in very specific domains. I think this will allow for the possibility of designing and deploying novel tech with unprecedentedly little testing and research time, potentially giving bold actors access to technology well beyond the current state of the art in some ways.
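A rough sketch of what such a toolkit might look like structurally: a general model that routes queries to narrow domain-specific predictors where they exist, falling back to itself otherwise. All names here are hypothetical placeholders:

```python
# Sketch of the 'toolkit of narrow AIs' pattern: a general model routing
# queries to narrow, superhuman domain predictors. All names are
# hypothetical placeholders.
from typing import Callable, Dict

NarrowModel = Callable[[str], str]  # a narrow predictor: query in, answer out

class ToolkitAgent:
    def __init__(self) -> None:
        self.specialists: Dict[str, NarrowModel] = {}

    def register(self, domain: str, model: NarrowModel) -> None:
        """Add a narrow predictor (e.g. protein structure, circuit timing)."""
        self.specialists[domain] = model

    def answer(self, domain: str, query: str) -> str:
        # Use a superhuman specialist when one exists; otherwise fall back
        # to the (stubbed) general model.
        if domain in self.specialists:
            return self.specialists[domain](query)
        return self.general_model(query)

    def general_model(self, query: str) -> str:
        raise NotImplementedError("wire this up to a general-purpose model")

agent = ToolkitAgent()
agent.register("protein-structure", lambda seq: f"predicted fold for {seq}")
print(agent.answer("protein-structure", "MKTAYIAKQR"))
```

The interesting prediction is not the routing pattern, which is trivial today, but that the general AI will build and train the specialists itself, compressing the research-and-testing cycle for new technology.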
An example of such tech could be repurposing existing read/write brain-implant chips to allow a dictator to make ‘slave chips’ that completely override a victim’s motivational centers, making them unwaveringly, enthusiastically loyal to the dictator. Possibly also making them smarter and/or allowing them to interface with other people’s brains and/or with AI. If so, this would basically be like the Borg. Such an amalgam of enslaved humans and computers networked together could seem like an appealing way for a technologically-behind dictatorship like North Korea to compete economically and militarily with larger, better-resourced nations. This sounds like a very science-fiction scenario, but in terms of cost and prerequisite technology it is quite achievable. What is currently lacking is mainly the knowledge that this would be possible and affordable (which could be supplied by an intent-aligned, ethics-unaligned AGI searching scientific papers for potentially advantageous tech), and the motivation / willingness to do this despite the unethical nature of the experimentation (including the likely low survival rate of the early subjects).
Things I’m quite unsure about, but which seem like possibilities worth considering:
AI may accelerate research on nanotech, and thus we might see impressive molecular manufacturing unlocked by 2028.
AI might speed up fusion research, such that fusion becomes widely economically viable.
Robust, superhumanly fast AI pilots may make certain military operations much cheaper and easier. This could make it much cheaper and easier to deploy wide-ranging missile defense systems. If so, this would upset the uneasy Mutually Assured Destruction détente that currently prevents world powers from attacking each other. This, combined with increased tensions from AGI and the surge in technological development, could result in large-scale conflicts.
Somebody may be foolish enough to unleash an AGI agent just for $@&*$ and giggles, giving it explicit instructions to reproduce, self-improve, and seek power, and then deliberately relinquishing control over it or letting it escape. This probably wouldn’t be game over for humanity, but it could result in a significant catastrophe if the initial AI is sufficiently capable.
My current timelines, and the opportunity to bet with or against them:
Recursive Self-Improvement (RSI) by mid 2026
AGI by late 2027, probably sooner.