Max, many of these problems have already stayed stagnant for longer than your remaining life expectancy.
Suppose you calibrate yourself to “the future will be similar to the past,” which is the baseline case without AGI, and assume, for the sake of argument, that no current AI really gets used: the government bans it all before it becomes strong enough to be more than a toy.
Now invert the view: what could you do on these problems with a human-level AI? With a moderate superintelligence? You could solve every problem listed with a human-level AGI, except the medical ones, which require some level of superintelligence.
People complain about my posts being too long, so I won’t make an exhaustive list, but current AI models have difficulty with robotics. It is a hard problem. You likely need several more orders of magnitude of compute to solve robotics and to run the solution in real time.
If you can solve robotics (meaning AI models controlling various forms of robot can do nearly all manufacturing and maintenance tasks for almost every industry on earth), many of the problems trivialize.
Energy, climate change, and space travel become trivial.
Georgism and land use aren’t really solved, but areas that do allow prebuilt robotic modular buildings could transform themselves at incredible rates: essentially from an empty field to a new Hong Kong in a year. Buildings could be replaced in a month, because everything would be made of large modules that fold up and fit on a truck, with robots guided by AGI doing every step. A lot of maintenance would be done by module swaps: a robot grabs the entire bathroom and pulls it off the side of the building instead of fixing the plumbing fault.
The reason you need superintelligence for medicine is that the body is a complex system with too many variables for a living human brain to consider them all. And you need to give every person on earth, or at least every first-world resident, the benefit of a medical super-expert if you want them to live as long as possible; human medical expert decision-making does not scale. Current AI is not strong enough, and to bypass regulatory barriers, medical AI needs to be overwhelmingly and obviously superhuman in order to convince regulators.
Just to add a detail paragraph: the simple reason is that an older person, even if anti-aging medicine worked partially, is an unstable system. It’s like balancing on a ball bearing: any small disturbance destabilizes the system, the body fails to compensate because of its degraded components, and death eventually follows. You would need someone to be monitored 24/7, with an implant able to dispense drugs (including new ones invented on demand to handle genetic variants) to keep them stable past a certain age. There are thousands of variables, and they interrelate in complex ways a single human cannot learn before they themselves are dead of aging.
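For intuition only, here is a minimal toy sketch of that instability: a made-up first-order system where deviations self-amplify unless an external controller keeps correcting them. All parameters here are illustrative stand-ins I chose for the sketch, not medical data.

```python
import random

def simulate(a, k, steps=200, noise=0.01):
    """One run of x_{t+1} = (a - k) * x_t + disturbance.
    a > 1: small deviations self-amplify (degraded internal compensation).
    k:     strength of continuous external correction (monitoring + intervention).
    """
    x = 0.0
    for _ in range(steps):
        x = (a - k) * x + random.gauss(0, noise)
    return x

random.seed(0)
print("uncorrected:", simulate(a=1.05, k=0.0))   # |x| grows without bound
print("corrected:  ", simulate(a=1.05, k=0.10))  # |x| stays near zero
```

Without correction the effective gain is above 1, so every small disturbance compounds; with even modest continuous correction the gain drops below 1 and the system stays bounded. That is the claimed role of 24/7 monitoring.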
So no, a more calibrated view of the future is that if humans do manage to coordinate and ban AI without defecting, the next 60-100 years of your life, Max (and that’s all you will see; I would put 999:1 odds on it) are going to look like the last 60 years, but with more regulations and an older population.
This means that banning superintelligence from ever existing is choosing death, for yourself and probably for every human being alive right now, because solving these problems will likely require generations of trial and error, as aging mechanisms get missed, the human patients die anyway, and lifespans extend only gradually. If you think you can coordinate forever, then theoretically 10^50 humans could one day exist, versus the small cost of 10^11* humans dead now (10 generations of trial and error before life extension works), but I will leave the problems with this argument for a later post.
*Approximately 80 billion people, or slightly less than all humans who have ever lived. Lesser pauses, like a 6-month delay, cost megadeaths instead of gigadeaths.
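A quick back-of-envelope check of those numbers; the inputs (population per generation, global deaths per year) are my own rough assumptions, not figures from the footnote.

```python
# Rough assumptions for the sketch:
population_per_generation = 8e9   # roughly everyone alive today
generations = 10                  # trial-and-error rounds before life extension works
print(f"long pause: {population_per_generation * generations:.0e} deaths")
# -> 8e+10, i.e. ~10^11, about 80 billion

global_deaths_per_year = 6e7      # ~60 million deaths/year at current rates
print(f"6-month delay: {0.5 * global_deaths_per_year:.0e} deaths")
# -> 3e+07, i.e. tens of megadeaths rather than gigadeaths
```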