All you have to do is simultaneously contact, say, 400 people, and at least one of them will fall for it.
But at what point does it decide to do so? It won’t be a master of dark arts and social engineering from the get-go. So how does it acquire the initial talent without making any mistakes that reveal its malicious intentions? And once it has become a master of deception, how does it hide the rough side effects of its large-scale conspiracy, e.g. its increased energy consumption and data traffic? I mean, I would personally notice if my PC suddenly and unexpectedly used 20% of my bandwidth and the CPU load increased for no good reason.
You might say that a global conspiracy to build and acquire advanced molecular nanotechnology to take over the world doesn’t require many resources, and that the necessary computations could easily be disguised as work on some harmless puzzle, but that seems rather unlikely. After all, such a large-scale conspiracy is a real-world problem with many unpredictable factors, and it requires physical intervention.
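For what it’s worth, noticing that kind of side effect doesn’t take anything exotic. Here is a minimal sketch in Python using the psutil library; the 5-second window and the alarm thresholds are made-up values for illustration, not a real detection policy:

```python
# Minimal resource-usage monitor, assuming the psutil library is installed.
# The thresholds below are hypothetical, chosen only for illustration.
import psutil

CPU_THRESHOLD = 50.0      # percent; hypothetical alarm level
NET_THRESHOLD = 1 << 20   # bytes/sec (~1 MiB/s); hypothetical alarm level

last = psutil.net_io_counters()
while True:
    cpu = psutil.cpu_percent(interval=5)  # average CPU over a 5 s window
    now = psutil.net_io_counters()
    rate = (now.bytes_sent + now.bytes_recv
            - last.bytes_sent - last.bytes_recv) / 5
    last = now
    if cpu > CPU_THRESHOLD or rate > NET_THRESHOLD:
        print(f"anomaly: cpu={cpu:.0f}%, net={rate / 1024:.0f} KiB/s")
```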
All you have to do is simultaneously contact, say, 400 people, and at least one of them will fall for it.
But at what point does it decide to do so? It won’t be a master of dark arts and social engineering from the get-go. So how does it acquire the initial talent without making any mistakes that reveal its malicious intentions?
Most of your questions have answers that follow from asking analogous questions about past human social engineers, e.g. Hitler.
Your questions seem to come from the perspective that the AI will be some disembodied program in a box that has little significant interaction with humans.
In the scenario I was considering, the AIs will have a development period analogous to human childhood. During this childhood phase the community of AIs will learn about humans through interaction in virtual video-game environments and experiment with social manipulation, just as human children do. The later phases of this education can be sped up dramatically as the AIs accelerate and interact increasingly amongst themselves. The anonymous nature of virtual online communities makes potentially dangerous, darker experiments much easier.
However, the important questions to ask are not of the form “How would these evil AIs learn to manipulate us while hiding their true intentions for so long?” but rather “How could some of these AI children, which initially seemed so safe, later develop into evil sociopaths?”
I would not consider a child AI that tries out a bungled lie on me, just to see what I do, “so safe”. I would immediately shut it down and debug it, at best, or write a paper on why the approach I used should never, ever be used to build an AI.
And it WILL bungle its first lie. It can’t learn the need to be subtle without witnessing the repercussions of not being subtle. Nor would it have a reason to run social experiments in chat rooms when it doesn’t understand chat rooms and has an engineer willing to talk to it right there. That is, assuming I was dumb enough to give it an unfiltered Internet connection, which I don’t know why I would be. At the very least, the moment it goes on chat rooms my tracking devices should discover this, and I could witness its bungled lies first hand.
(It would not think to fool my tracking device, or even consider the existence of such a thing, without a good understanding of human psychology to begin with.)
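To make the “tracking device” concrete: even a crude version is just a loop that logs every new outbound connection from the boxed process and flags anything off a whitelist. A hedged sketch in Python, again with psutil; the PID and the whitelist are hypothetical, and a real setup would enforce this at the firewall rather than by polling:

```python
# Crude "tracking device": poll for new outbound connections from the
# AI's process and flag anything not on a whitelist. AI_PID and ALLOWED
# are hypothetical; psutil.net_connections may require root privileges,
# and a real monitor would sit at the network layer, not in a polling loop.
import time
import psutil

AI_PID = 12345                    # hypothetical PID of the boxed AI process
ALLOWED = {("10.0.0.2", 5000)}    # hypothetical whitelisted endpoint

seen = set()
while True:
    for conn in psutil.net_connections(kind="inet"):
        if conn.pid == AI_PID and conn.raddr:
            dest = (conn.raddr.ip, conn.raddr.port)
            if dest not in seen:
                seen.add(dest)
                note = "" if dest in ALLOWED else "  <-- UNEXPECTED"
                print(f"outbound to {dest[0]}:{dest[1]}{note}")
    time.sleep(1)
```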