I’m annoyed when people attempt to analyze how the future might get weird by looking at how powerful AI agents might influence human society while neglecting how powerful AI agents might influence the Universe.
There is a physical world out there. It is really big. The biosphere of the Earth is really small comparatively. Look up, look down, now back at me. See those icy rocks floating around up there? That bright ball of gas undergoing fusion? See that mineral-rich sea floor and planetary mass below us? Those are raw materials, which can be turned into energy, tools, and compute.
Wondering about things like human unemployment—while the technology you are describing un-employing those humans is also sufficient to have started a self-replicating industrial base in the asteroid belt—is myopic.
If there is a self-sustaining AI agent ecosystem outside your power to control or stop, you’ve likely already lost the reins of the future. If you’ve managed to get AI to behave well on Earth and everyone is happily supported by UBI, but some rogue AIs are building an antimatter-powered laser in the Oort Cloud to boost solar-sail-powered von Neumann probes to nearby solar systems… you lost. Game over.
This does NOT require an AI to spontaneously go rogue, or to initiate an escape from human control. It’s sufficient for an unwise human to have control over an AI capable of initiating the process, and to say ‘Go’. If AI agents become widely available, sooner or later some human will do something like this. Human populations have crazy outliers no matter how sane the median individual is.
If you want to talk about human-scale normal-sounding stuff 20 years in the future… you need to route your imagined future through some intense global government control of AI. Probably involving a world government with unprecedented levels of surveillance.
Some scattered thoughts, which all connect back to this idea of avoiding myopic thinking about the future:
Don’t over-anchor on a specific idea. The world is a big place and a whole lot of different things can be going on all at once. Intelligence unlocks new affordances for affecting the Universe. Think in terms of what is possible given physical constraints, and use lots of reasoning of the type: “and separately, there is a chance that X might be happening”. Everything everywhere, all at once.
Self-replicating industry.
Melting Antarctic ice (perhaps using the massive oil reserves beneath it).
Massive underground/undersea industrial plants powered by nuclear energy and/or geothermal and/or fossil fuels.
Weird, potent, self-replicating synthetic biology in the sea / desert / Antarctic / Earth’s crust / asteroid belt / moons (e.g. Enceladus, Europa), to give a few examples.
Nano/bio tech, or hybrids thereof, doing strange things. Mysterious illnesses with mind-altering effects. Crop failures. Or crops sneakily producing traces of mysterious drugs.
Brain-computer interfaces being used to control people or animals, using them as robotic actuators and/or sources of compute.
Weird cults acting at the behest of super-persuasive AIs.
Destabilization of military and economic power through potent new autonomous weapons and industry. Super-high-altitude, long-duration (months at a time) autonomous drone flights for spying and missile defense. Insect-sized robots infiltrating military installations, sabotaging nuclear weapons or launching them.
Sophisticated personalized spear-phishing deception/persuasion attacks on huge numbers of people. The waterline for being important enough to target dropping very quickly.
Cyberattacks, stealing information or planting fake information. Influenced elections. Deep fakes. Coordinated propaganda action, more skillful than ever before, at massive scale. Insider trading, money laundering, and currency manipulation by AI agents that simply disappear when investigated.
More. More than you or I can imagine. A big wide world of strange things happening that we can’t anticipate, outside our OODA loops.
Some other ideas for what could well happen in the 21st century assuming superhuman AIs come in either 2030 or 2040:
Formal methods actually working to protect codebases in the real world, e.g. full behavioral specifications of software, alongside interventions to make code and cryptography secure.
While I agree with Andrew Dickson that the Guaranteed Safe AI agenda mostly can’t work due to several limitations, I do think that something like its ideal actually does work for math/formal proof systems like Lean and for software with AGI/ASI, and has already happened before; cf. these 2 examples:
https://www.quantamagazine.org/how-the-evercrypt-library-creates-hacker-proof-cryptography-20190402/
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
This is a big reason why I believe cyber-defense will eventually just win outright over cyber-offense, and that domains where security is a must-have, like banks and national government secrets, will be perfectly secure even earlier.
https://www.lesswrong.com/posts/B2bg677TaS4cmDPzL/limitations-on-formal-verification-for-ai-safety
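To make the “full behavioral specification” idea concrete, here is a minimal Lean 4 sketch (my own toy example, not taken from the Dickson post or the linked projects): you write an implementation and then machine-check theorems about its behavior on all inputs, instead of relying on tests.

```lean
-- Toy sketch of a behavioral specification in Lean 4 (recent Lean, no external libraries).
-- The function is deliberately trivial; the point is that its behavior is
-- pinned down by proofs rather than by testing.

def double (n : Nat) : Nat := n + n

-- Spec 1: the output is always even.
theorem double_even (n : Nat) : double n % 2 = 0 := by
  unfold double
  omega

-- Spec 2: the function is monotone in its input.
theorem double_monotone {a b : Nat} (h : a ≤ b) : double a ≤ double b := by
  unfold double
  omega
```

The projects linked above do this at the scale of real cryptographic libraries and systems code, which is where the hard (and plausibly AI-accelerable) work lies.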
Uploading human brains in the 21st century. It’s turning out that while the brain is complicated and uses a lot of compute for inference, it’s not too complicated, and it’s slowly becoming somewhat understood; at any rate, the effort would be greatly sped up by AIs. My median timeframe for reliably uploading human brains is 2040-2060, i.e. several decades from now.
Reversible computing becoming practical. Right now, progress in compute hardware is slowly ending, and I think the 2030s are when non-reversible computers will stop progressing.
Anything that allows us to go beyond the Landauer limit will by necessity be reversible, and the most important missing piece here is a practical way to build a computer that operates reversibly and is actually robust.
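For reference, the Landauer limit mentioned here is the minimum energy that must be dissipated per bit irreversibly erased; a quick back-of-the-envelope at room temperature (my arithmetic, not the author’s):

```latex
% Landauer bound on dissipation per erased bit at temperature T:
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\ \mathrm{J/K}) \cdot (300\ \mathrm{K}) \cdot 0.693
         \approx 2.9 \times 10^{-21}\ \mathrm{J\ per\ bit}.
```

Reversible computing sidesteps this bound precisely because, ideally, no bits are erased.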
Speaking of that, practical superconductors might be invented by AI; if so, they would let us eliminate power-transfer losses due to resistance. That is huge, since I believe essentially all of the energy lost in moving electricity comes from materials having resistance, and superconductors by definition have zero resistance, meaning zero power lost to resistance.
This is especially useful for computing hardware.
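The resistance point cashes out in the standard Joule-heating relation (textbook physics, not anything specific to this thread):

```latex
% Power dissipated in a conductor carrying current I through resistance R:
P_{\text{loss}} = I^2 R , \qquad \text{so } R = 0 \implies P_{\text{loss}} = 0 .
```

(DC transmission through a superconductor dissipates nothing; AC operation and cryogenic cooling still cost something in practice.)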
But these are my picks for tech invented by AIs in the 21st century.
As someone who focuses on concerns like human unemployment, I have a few reasons for doing so:
I expect AI alignment and control to be solved by default, enough so that I can use it as a premise when thinking about future AIs.
I expect political problems like mass human unemployment to plausibly be a bit tricky to solve. IMO, the sooner aligned superhuman intelligence is in the government, the better we can make our politics.
I expect aligned AIs and humans to go out into the stars reasonably soon, such that we don’t lose almost all control over the future, and depending on the physics involved, even a single star or galaxy might be enough to let us control our future entirely.
Conditional on at least 1 aligned, superhumanly intelligent AI, I expect existential risk to drop fairly dramatically, and in particular I think the vulnerabilities that would leave us exposed to rogue ASI can be fixed by aligned ASI.
I agree that aligned ASI fixes a lot of the vulnerabilities. I’m trying to focus on how humanity can survive the dangerous time between now and then. In particular, I think the danger peaks right before going away. The period where AI as a tool and/or independent agent gets stronger and stronger, but the world is not yet under the guardianship of an aligned ASI. That’s the bottleneck we need to navigate.
you’ve likely already lost the reins of the future

“Having the reins of the future” is a non-goal.
Also, you don’t “have the reins of the future” now, never have had, and almost certainly never will.
some rogue AIs are building an antimatter-powered laser in the Oort Cloud to boost solar-sail-powered von Neumann probes to nearby solar systems… you lost.

It’s true that I’d probably count anybody building a giant self-replicating swarm to try to tile the entire light cone with any particular thing as a loss. Nonetheless, there are lots of people who seem to want to do that, and I’m not seeing how their doing it is any better than some AI doing it.