What is the justification behind the concept of a decisive strategic advantage? Why do we think that a superintelligence can do extraordinary things (hack human minds, invent nanotechnology, conquer the world, kill everyone in the same instant) when nations and corporations can’t do those things?
(Someone else asked a similar question, but I wanted to ask in my own words.)
I think the best justification is by analogy. Humans do not have a physical decisive strategic advantage over other large animals—chimps, lions, elephants, etc. And for hundreds of thousands of years, we were not at the top of the food chain, despite our intelligence. However, intelligence eventually won out, and allowed us to conquer the planet.
Moreover, the benefit of intelligence increased exponentially in proportion to the exponential advance of technology. There was a long, slow burn, followed by what (on evolutionary timescales) was an extremely “fast takeoff”: a very rapid improvement in technology (and thus power) over only a few hundred years. Technological progress is now so rapid that human minds have trouble keeping up within a single lifetime, and genetic evolution has been left in the dust.
That’s the world into which AGI will enter—a technological world in which a difference in intellectual ability can be easily translated into a difference in technological ability, and thus power. We must assume that an AGI will master any future technology the laws of physics don’t explicitly prohibit, and master it faster than we can.
Someone else already commented on how human intelligence gave us a decisive strategic advantage over our natural predators and many environmental threats. I think this cartoon is my mental shorthand for that transition. The timescale is on the order of 10k-100k years, given human intelligence starting from the ancestral environment.
Empires and nations, in turn, conquered the world by taking it away from city-states and similarly smaller entities in ~1k-10k years. The continued existence of Singapore and the Sentinel Islanders doesn’t change the fact that a modern large nation could wipe them out in a handful of years, at most, if we really wanted to. We don’t because doing so is not useful, but the power exists.
Modern corporations don’t want to control the whole world. Like Fnargl, that’s not what they’re pointed at. But it only took a few decades for Walmart to displace a huge swath of the formerly-much-more-local retail market, and even fewer decades for Amazon to repeat a similar feat online, each starting from a good set of ideas and a much smaller resource base than even the smallest nations. And while corporations are militarily weak, they have more than enough economic power to shape the laws of at least some of the nations that host them in ways that let them accumulate more power over time.
So when I look at history, I see a series of major displacements of older systems by newer ones, on faster and faster timescales, using smaller and smaller fractions of our total resource base, all driven by our accumulation of better ideas and using those ideas to accumulate wealth and power. All of this has been done with brains no smarter, natively, than what we had 10k years ago—there hasn’t been time for biological evolution to do much there. So why should that pattern suddenly stop being true when we introduce a new kind of entity with even better ideas than the best strategies humans have ever come up with? Especially when human minds have already demonstrated a long list of physically-possible scenarios that might, if enacted, kill everyone in a short span of time, or at least disrupt us enough to buy time to mop up the survivors?
Here’s a YouTube video about it.
Having watched the video, I can’t say I’m convinced. I’m 50/50 on whether DSA is actually possible with any level of intelligence at all. If it isn’t possible, then doom isn’t likely (not impossible, but unlikely), in my view.
This post by the director of OpenPhil argues that even a human-level AI could achieve DSA, with coordination.
tldw: corporations are as slow as or slower than humans; AIs can be much faster
Thanks, love Robert Miles.
The informal way I think about it:
What would I do if I were the AI, but I had 100 copies of myself, and we had 100 years to think for every 1 second that passed in reality? And I had internet access.
Do you think you could take over the world from that opening?
Edit: And I have access to my own source code, but I only dare do things like fix my motivational problems and make sure I don’t get bored during all that time, things like that.
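Just to put rough numbers on that thought experiment, here is a minimal back-of-the-envelope sketch in Python. The 100 copies and the 100-subjective-years-per-real-second ratio are taken from the scenario above; everything else is simple arithmetic, not a claim about any real system:

```python
# Back-of-the-envelope arithmetic for the thought experiment above:
# 100 copies of the AI, each experiencing 100 subjective years of thinking
# time for every 1 second that passes in the real world.

copies = 100                            # number of parallel copies (from the scenario)
subjective_years_per_real_second = 100  # thinking-speed ratio (from the scenario)

def subjective_thinker_years(real_seconds):
    """Total subjective 'thinker-years' accumulated across all copies."""
    return copies * subjective_years_per_real_second * real_seconds

print(f"One real minute -> {subjective_thinker_years(60):,.0f} thinker-years")      # 600,000
print(f"One real day    -> {subjective_thinker_years(86_400):,.0f} thinker-years")  # 864,000,000
```

So even a single real-time day at those (stipulated) ratios buys the system the equivalent of hundreds of millions of years of careful thought before a human has finished reading one email.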
Do you dispute that this is possible in principle or just that we won’t get AI that powerful or something else?
It seems to me that there is some level of intelligence at which an agent is easily able to out-compete the whole rest of human civilization. What exactly that level of intelligence is, is somewhat unclear (in large part because we don’t really have a principled way to measure “intelligence” in general: psychometrics describe variation in human cognitive abilities, but that doesn’t really give us a measuring stick for thinking about how “intelligent”, in general, something is).
Does that seem right to you, or should we back up and build out why that seems true to me?
“It seems to me that there is some level of intelligence at which an agent is easily able to out-compete the whole rest of human civilization.”
This is the statement I disagree with, in particular the word “easily”. I guess the crux of this debate is how powerful we think any level of intelligence is. There have to be some limits, in the same way that even the wealthiest people in history could not forestall their own deaths no matter how much money or medical expertise was applied.
I’m not compelled by that analogy. There are lots of things that money can’t buy, but that (sufficient) intelligence can.
There are theoretical limits to what cognition is able to do, but those are so far from the human range that they’re not really worth mentioning. The question is: “are there practical limits to what an intelligence can do, that would leave even a super-intelligence uncompetitive with human civilization?”
It seems to me that, as an example, you could just take a particularly impressive person (Elon Musk or John von Neumann are popular exemplars) and ask “What if there was a nation of only people who were that capable?” It seems that if a nation of, say, 300,000,000 Elon Musks went to war with the United States, the United States would lose handily. Musktopia would just have a huge military-technological advantage: they would do fundamental science faster, develop engineering innovations faster, and have better operational competence than the US, on ~all levels. (I think this is true for a much smaller number than 300,000,000; having a number that high just makes the point straightforward.)
Does that seem right to you? If not, why not?
Or alternatively, what do you make of vignettes like That Alien Message?
I don’t think a nation of Musks would win against the current USA, because Musk is optimised for some things (making an absurd amount of money, CEOing, tweeting his shower thoughts), but an actual war requires a rather more diverse set of capacities.
Similarly, I don’t think an AGI would necessarily win a war of extermination against us, because currently (emphasis on currently) it would need us to run its infrastructure. This would change in a world where all industrial tasks could be carried out without physical input from humans, but we are not there yet and will not be soon.
Did you see the new one about Slow motion videos as AI risk intuition pumps?
Thinking of ourselves like chimpanzees while the AI is the humans is really not the right scale: computers operate so much faster than humans, we’d be more like plants than animals to them. When there are all of these “forests” of humans just standing around, one might as well chop them down and use the materials to build something more useful.
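A rough sketch of the arithmetic behind that intuition is below. The firing rates, clock speeds, and response times are ballpark figures I’m supplying for illustration, not numbers from the comment above:

```python
# Rough speed-ratio arithmetic behind the "humans as plants" intuition.
# All numbers are ballpark assumptions, not measurements.

neuron_ops_per_second = 200        # assumed: peak biological firing rate, a few hundred Hz
transistor_ops_per_second = 3e9    # assumed: a ~3 GHz clock

serial_speed_ratio = transistor_ops_per_second / neuron_ops_per_second
print(f"Transistor vs neuron: ~{serial_speed_ratio:,.0f}x")                    # ~15,000,000x

# For comparison, the human-vs-plant gap: a person reacts in ~0.2 s,
# while a plant visibly moves (e.g. tracking the sun) over ~hours.
human_reaction_s = 0.2
plant_response_s = 4 * 3600
print(f"Human vs plant:       ~{plant_response_s / human_reaction_s:,.0f}x")   # ~72,000x
```

Under these assumed numbers, the raw speed gap between silicon and neurons is a couple of orders of magnitude larger than the gap between a person and a sunflower, which is the sense in which we would look like scenery rather than agents.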
This is not exactly a new idea. Yudkowsky already likened the FOOM to setting off a bomb, but the slow-motion video was a new take.
Yes I did, in fact I was active in the comments section.
It’s a good argument and I was somewhat persuaded. However, there are some things to disagree with. For one thing, there is no reason to believe that early AGI actually will be faster than, or even as fast as, humans on any of the tasks that AIs struggle with today. For example, almost all videos of novel robotics applications research are sped up, sometimes hundreds of times. If SayCan can’t deliver a wet sponge in less than a minute, why do we think that early AGI will be able to operate faster than us? (I was going to reply to that post with this objection, but other people beat me to it.)
Those limits don’t have to be nearby, or look ‘reasonable’, or be inside what you can imagine.
Part of the implicit background for the general AI safety argument is a sense for how minds could be, and that the space of possible minds is large and unaccountably alien. Eliezer spent some time trying to communicate this in the sequences: https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general, https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message.
This is the sequence post on it: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message, it’s quite a fun read (to me), and should explain why something smart that thinks at transistor speeds should be able to figure things out.
For inventing nanotechnology, the given example is AlphaFold 2.
For killing everyone in the same instant with nanotechnology, Eliezer often references Nanosystems by Eric Drexler. I haven’t read it, but I expect the insight is something like “Engineered nanomachines could do a lot more than those limited by designs that have a clear evolutionary path from chemicals that can form randomly in the primordial ooze of Earth.”
For how a system could get that smart, the canonical idea is recursive self-improvement (i.e., an AGI capable of AGI engineering could design better versions of itself, which could in turn design even better versions, and so on, up to whatever limit). But more recent history in machine learning suggests you might be able to go from sub-human to overwhelmingly super-human just by giving a system a few orders of magnitude more compute, without any design changes.
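As a minimal toy illustration of that recursive-self-improvement dynamic (not a model of any real system; the starting capability, per-step gain, and cap below are arbitrary numbers chosen for illustration), a small compounding design advantage goes from sub-human to far-above-human in a handful of generations and then flattens out at whatever the underlying limit is:

```python
# Toy model of recursive self-improvement: each generation designs the next,
# and the size of the improvement scales with the designer's current capability,
# up to some (unknown) limit. All parameters are made up for illustration.

def rsi_trajectory(c0, gain, limit, steps):
    """capability_{n+1} = capability_n * (1 + gain * capability_n), capped at `limit`."""
    caps = [c0]
    for _ in range(steps):
        caps.append(min(limit, caps[-1] * (1 + gain * caps[-1])))
    return caps

# Start at "slightly sub-human" capability (human := 1.0) with a modest per-step gain.
for generation, c in enumerate(rsi_trajectory(c0=0.8, gain=0.5, limit=1000.0, steps=10)):
    print(f"generation {generation:2d}: capability ~ {c:8.2f}  (human = 1.0)")
```

The qualitative shape is the point: slow at first, then very fast, then flat at whatever the limit turns out to be.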