When I read these AI control problems, I always think an arbitrary human is being conflated with the AI's human owner. Perhaps I'm mistaken and should read them as if AIs own themselves, but I don't find that case likely, so I'd probably stop here if we're presupposing it.
Now, if an AI is lying to or deceiving its owner, that's a bug. In fact, when debugging I often feel I'm being lied to; normal code just isn't a very sophisticated liar. I could see an owner wanting to train an AI in lying and deceiving, and maybe even have it practice on other people (say, a Wall Street AI). Now we have a sophisticated liar, but we still have a bug. I find it likely that the owner would have encountered this bug many times as the AI grew more and more sophisticated. If he never encountered it, that would point to great improvements in software development.
NNs' connection to biology is very thin. Artificial neurons don't look or act like biological neurons at all. But as a coined term to sell your research idea, it's great.
NNs are popular now for their deep learning properties and their ability to learn features (like edge detectors) from unlabeled data.
Comparing NNs to SVMs isn't really fair; you use the best tool for the job. If you have lots of labeled data, you're more likely to use an SVM. It just depends on what problem you're being asked to solve. And of course you might feed an NN's output into an SVM or vice versa, as in the sketch below.
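A minimal sketch of that last point, assuming scikit-learn (the library, dataset, and parameters are my own illustration, not anything from the discussion above): a restricted Boltzmann machine, a small neural net trained without labels, learns features that a plain SVM then classifies.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X = X / 16.0  # BernoulliRBM expects inputs scaled to [0, 1]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Unsupervised feature learning (the RBM stage ignores the labels),
    # followed by a supervised SVM trained on the learned features.
    model = Pipeline([
        ("features", BernoulliRBM(n_components=64, learning_rate=0.05,
                                  n_iter=20, random_state=0)),
        ("classify", SVC(kernel="rbf")),
    ])
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))

The division of labor is the whole point: the NN stage never sees a label, and the SVM only sees the learned features.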
As for major achievements: NNs are leading for now because 1) most of the world's data is unlabeled, and 2) automated feature discovery (deep learning) is better than paying people to hand-craft features.