Do you dispute that this is possible in principle or just that we won’t get AI that powerful or something else?
It seems to me that there is some level of intelligence at which an agent is easily able to out-compete the whole rest of human civilization. What exactly that level of intelligence is, is somewhat unclear (in large part because we don’t really have a principled way to measure “intelligence” in general: psychometrics describes variation in human cognitive abilities, but that doesn’t really give us a measuring stick for how “intelligent”, in general, something is).
Does that seem right to you, or should we back up and build out why that seems true to me?
It seems to me that there is some level of intelligence at which an agent is easily able to out-compete the whole rest of human civilization.
This is the statement I disagree with, in particular the word “easily”. I guess the crux of this debate is how powerful we think any given level of intelligence is. There have to be some limits, in the same way that even the wealthiest people in history could not forestall their own deaths, no matter how much money or medical expertise was applied.
I’m not compelled by that analogy. There are lots of things that money can’t buy, but that (sufficient) intelligence can.
There are theoretical limits to what cognition can do, but those are so far beyond the human range that they’re not really worth mentioning. The question is: “are there practical limits to what an intelligence can do, that leave even a super-intelligence unable to out-compete human civilization?”
It seems to me that, as an example, you could just take a particularly impressive person (Elon Musk or John von Neumann are popular exemplars) and ask “What if there were a nation made up only of people that capable?” It seems that if a nation of, say, 300,000,000 Elon Musks went to war with the United States, the United States would lose handily. Musktopia would just have a huge military-technological advantage: they would do fundamental science faster, develop engineering innovations faster, and have better operational competence than the US, on ~all levels. (I think this is true for a much smaller number than 300,000,000, but having a number that high makes the point straightforward.)
Does that seem right to you? If not, why not?
Or alternatively, what do you make of vignettes like That Alien Message?
I don’t think a nation of Musks would win against the current USA, because Musk is optimised for some things (making an absurd amount of money, CEOing, tweeting his shower thoughts), but an actual war requires a rather more diverse set of capacities.
Similarly, I don’t think an AGI would necessarily win a war of extermination against us, because currently (emphasis on currently) it would need us to run its infrastructure. This would change in a world where all industrial tasks could be carried out without physical input from humans, but we are not there yet and will not be soon.
Did you see the new one about Slow motion videos as AI risk intuition pumps?
Thinking of ourselves like chimpanzees while the AI is the humans is really not the right scale: computers operate so much faster than humans that we’d be more like plants than animals to them. When there are all these “forests” of humans just standing around, one might as well chop them down and use the materials to build something more useful.
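To make the speed gap concrete, here is a rough back-of-the-envelope version of that intuition. The specific numbers (a ~1 GHz serial step rate for the machine, ~100 Hz peak neuron firing, ~200 ms human reaction time) are illustrative assumptions of mine, not figures taken from the post:

```python
# Back-of-the-envelope sketch of the "slow motion" intuition.
# All numbers below are rough, illustrative assumptions, not measurements.

machine_step_rate_hz = 1e9      # assume ~1 GHz of serial steps for the machine
neuron_firing_rate_hz = 100.0   # peak firing rate of a typical neuron, ~100 Hz
human_reaction_time_s = 0.2     # typical human reaction time, ~200 ms

# Ratio of "clock speeds": how many machine steps fit into one neuron spike.
steps_per_spike = machine_step_rate_hz / neuron_firing_rate_hz

# How long a single human reaction lasts from the machine's side, if we
# naively map machine steps onto subjective seconds at the neuron rate.
subjective_seconds = human_reaction_time_s * steps_per_spike

print(f"Machine steps per neuron spike: {steps_per_spike:,.0f}")
print(f"One human reaction (~0.2 s) corresponds to roughly "
      f"{subjective_seconds / 3600:,.0f} subjective hours, "
      f"or about {subjective_seconds / 86400:,.0f} subjective days.")
```

On those assumptions, a single ~0.2 s human reaction spans on the order of weeks of machine-subjective time, which is the sense in which we’d look more like plants than animals.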
This is not exactly a new idea. Yudkowsky already likened the FOOM to setting off a bomb, but the slow-motion video was a new take.
Yes, I did; in fact, I was active in the comments section.
It’s a good argument, and I was somewhat persuaded. However, there are some things to disagree with. For one thing, there is no reason to believe that early AGI will actually be faster than, or even as fast as, humans on any of the tasks that AIs struggle with today. For example, almost all videos of novel robotics research are sped up, sometimes hundreds of times. If SayCan can’t deliver a wet sponge in less than a minute, why should we think that early AGI will be able to operate faster than us? (I was going to reply to that post with this objection, but other people beat me to it.)
Those limits don’t have to be nearby, or look ‘reasonable’, or be inside what you can imagine.
Part of the implicit background for the general AI safety argument is a sense of how varied minds could be, and that the space of possible minds is large and unaccountably alien. Eliezer spent some time trying to communicate this in the Sequences: https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general, https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message.