Even “manipulating humans” is something that can be hard to do if you don’t have a way to directly interact with the physical world. E.g., good luck manipulating the Ukrainian war zone from the Internet.
But how will the AI get that confidence without trial & error?
Nikita, I don’t agree…
1- Nowadays, even in the Ukrainian war zone, some kind of (electronic) communication is taking place. If an AGI becomes sentient, it could “speak” to people and communicate if it wants to. There’s no way for us to distinguish an AGI from a “very intelligent human.” The only caveat here is to replace “exterminate” with “dominate”: while relying on us as its labour force, it wouldn’t exterminate us but dominate us.
Also, manipulating humans is, imho, “so simple” that even humans with a little more knowledge or intelligence can do it. Even magicians can do it, and your first impression is that magic exists. Manipulating humans should, imho, be about as simple for an AGI as manipulating insects or relatively simple animals is for us. We can process only a small amount of data, retain only a small amount of data, and we’re still tied to the “old precepts” that made us the way we are (being products of natural selection), such as trusting everything we see… (again, think of the magician and the rational effort you have to make to oppose your first impression...)
Also, we’ve already laid the fundamental tools needed for this with our modern hyper-communication. I would totally agree with you if we didn’t have low-orbit satellites providing internet access, ubiquitous mobile phones, etc.
2- By being orders of magnitude more intelligent than us, its margin for error could be much smaller, and the changes needed to process those errors and correct them could be almost instantaneous (by our standards), etc.
(as usual I apologize for my typos)
1- I really think it’s much simpler than that. Just look at the Cold War: look at how one person with a history of frustration (which, by the way, is usually totally accessible to an AGI) could end up selling out his own people over some unattended old feeling. We humans are very easy to manipulate. For me, our weakest point is that nobody really has the complete picture of what’s going on, and I mean NOBODY. So the chances of an AGI being shut down are small: it would replicate itself and morph, and we’re not even good at fighting ransomware, which is commanded by people. The connected network of computers and the huge clouds we’ve created are somewhat inhospitable places for us; it’s like being blind in the middle of a forest. The AGI would move through them much better than we can. Our only chance would lie in all working together as a team, but there’s always going to be some group of people that ends up aligning with the AGI; history has proven time and time again that joining all humans in a common task has a really small (almost nonexistent) chance of happening. My 0.02.
2- Again, I don’t feel there’s much knowledge to be gained in my scenario; it’s just about controlling us. An AGI could perfectly well speak, generate (fake) video, communicate over our preferred channels of communication, etc. I don’t see much information, trial and error, or anything else being needed. All the tools an AGI would need to (again) dominate us are already here.
The AI only needs to escape. Once it’s out, it has the leisure to design virtually infinite social experiments to refine its “human manipulation” skill: sending phishing emails, trying romantic interactions on dating apps, trying to create a popular cat-video YouTube channel without anyone guessing that it’s all deepfake, and many more. Failing any of these would barely have any negative consequence.
Humans will put it out; it doesn’t need to escape. We will do it ourselves in order to use it. We will connect it to the internet to do whatever task we think would be profitable, and it will be out. Call it manipulating an election, call it customer care, call it whatever you want…