I agree with this post. As Eliezer says, a conflict between AI and humanity is unlikely to take the form of humanoid robots fighting humans, Terminator-style; it would more likely be far more boring and subtle. I also think one of the key attack vectors for an AI is the psychological fallibility of humans. An AI that is very good at pattern recognition (i.e., most AIs) would probably have little trouble identifying your vulnerabilities just from observing your behavior, or even your social media posts. You could probably tell whether someone is highly empathetic (vulnerable to emotional blackmail) or low-IQ (vulnerable to trickery) fairly easily just by reading their writing. There are already examples of programmers who fell in love with an AI and were ready to do its bidding. From there, if you can manipulate a rich person, or someone otherwise in a position of power, you can do a lot to covertly set up a losing position for humanity.