I’m not sure the analogy works or fully answers my question. The equilibrium that comes with ‘humans going about their business’ might favor human proliferation at the cost of plant and animal species (and even lead to apocalyptic ends), but as I understand it, the difference in intelligence between humans and a superintelligence is comparable to the gap between humans and bacteria, rather than between humans and insects or other animals.
I can imagine practical ways there might be friction between humans and SI, over resource appropriation for example, but the difference in resource use would also be analogous to collective human society vs. bacteria. An SI’s resource use would be so massive that the puny humans could go about their business. Am I not understanding it right? Or missing something?
Just as humans find it useful to kill a great many bacteria, an AGI would want to stop humans from e.g. creating a new, hostile AGI. In fact, it’s hard to imagine an alternative which doesn’t require a lot of work, because we know that in any large enough group of humans, one of us will take the worst possible action. As we are now, even if we tried to make a deal to protect the AI’s interests, we’d likely be unable to stop someone from breaking it.
I like to use the silly example of an AI transcending this plane of existence, as long as everyone understands this idea appears physically impossible. If somehow it happened anyway, that would mean there existed a way for humans to affect the AI’s new plane of existence, since we built the AI, and it was able to get there. This seems to logically require a possibility of humans ruining the AI’s paradise. Why would it take that chance? If killing us all is easier than either making us wiser or watching us like a hawk, why not remove the threat?
I’m not sure I understand your point about massive resource use. If you mean that SI would quickly gain control of so many stellar resources that a new AGI would be unable to catch up, it seems to me that:
1. people would notice the Sun dimming (or much earlier signs), panic, and take drastic action like creating a poorly-designed AGI before the first one could be assured of its safety, if it didn’t stop us;
2. keeping humans alive while harnessing the full power of the Sun seems like a level of inconvenience no SI would choose to take on, if its goals weren’t closely aligned with our own.
My assumption is that it’s difficult to design a superintelligence, and that humans will either hit a limit in the resources and energy needed to keep it running, or lose control of those resources as it reaches AGI.
My other assumption, then, is that an intelligence that can last forever and think and act at 1,000,000 times human speed will find non-disruptive ways to continue its existence. There may be some collateral damage to humans, but the universe is full of resources, so an existential threat doesn’t seem apparent (and there are other stars and planets; wouldn’t it be just as easy to wipe out humans as to go somewhere else?). The idea that a superintelligence would want to prevent humans from building another (or many more) to rival the first is compelling, but I think once a certain level of intelligence is reached, the actions and motivations of mere mortals become irrelevant to it (I could change my mind on this last point; I haven’t thought about it as much).
This is not to say that AI isn’t potentially dangerous or that it shouldn’t be regulated (it should, imo), just that an existential risk from SI doesn’t seem apparent. Maybe we disagree on how a superintelligence would interact with reality (or on how a superintelligence would present?). I can’t imagine that something that alien would worry or care much about humans. Our extreme inferiority will be either our doom or our salvation.
It’s not that it can’t come up with ways to not stamp on us. But why should it? Yes, it might only be a tiny, tiny inconvenience to leave us alone. But why even bother doing that much? It’s very possible that we would be of total insignificance to an AI, just like the ants that get destroyed at a construction site: no one even notices them. It still doesn’t turn out too well for them.
Though that only holds when there are massive differences of scale. When the differences are smaller, you get into inter-species competition dynamics, which is also what the OP was pointing at, if I understand correctly.
A superintelligence might just ignore us. It could also, e.g., strip mine the whole earth for resources, because why not? “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else”.
There are many atoms out there and many planets to strip mine, and a superintelligence has infinite time. Inter-species competition makes sense depending on where you place the intelligence dial. I assume that any intelligence that’s 1,000,000 times more capable than the next one down the ladder will ignore its ‘competitors’ (again, there could be collateral damage, but likely not large-scale extinction). If you place the dial at lower orders of magnitude, then humans are a greater threat to AI, AI reasoning will be closer to human reasoning, and we should probably take greater precautions.
To address the first part of your comment: I agree that we’d be largely insignificant, and I think it would be more inconvenient to wipe us out than to just go somewhere else or wait a million years for us to die off, for example. The closer a superintelligence is to human intelligence, the more likely it is to act like a human (such as deciding to wipe out the competition). The more alien the intelligence, the more likely it is to leave us to our own devices. I’ll think more on where the cutoff may be between a dangerous AI and a largely oblivious AI.