What are your friendly AIs going to learn first?
ENCRYPTION!
https://www.newscientist.com/article/2110522-googles-neural-networks-invent-their-own-encryption/
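For anyone who wants a feel for the setup: the article describes two networks (“Alice” and “Bob”) learning to communicate while a third (“Eve”) learns to eavesdrop. Below is a minimal sketch of that adversarial arrangement, assuming a tiny fully connected architecture and hyperparameters of my own choosing; the authors’ actual model differs.

```python
import torch
import torch.nn as nn

N = 16  # bits per plaintext and per key -- an arbitrary choice for this sketch

def mlp(in_dim, out_dim):
    # Tiny stand-in network; the actual paper used a different architecture.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = mlp(2 * N, N)  # (plaintext, key) -> cyphertext
bob   = mlp(2 * N, N)  # (cyphertext, key) -> plaintext guess
eve   = mlp(N, N)      # cyphertext only   -> plaintext guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)
l1 = nn.L1Loss()

def batch(size=256):
    # Random plaintexts and keys encoded as -1/+1 "bits".
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,
            torch.randint(0, 2, (size, N)).float() * 2 - 1)

for step in range(5000):
    # Alice/Bob step: Bob should reconstruct P, while Eve's error is pushed
    # toward chance level (an L1 of ~1.0 on -1/+1 bits means pure guessing).
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    loss_ab = l1(bob(torch.cat([c, k], dim=1)), p) + (1.0 - l1(eve(c), p)).abs()
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # Eve step: train the eavesdropper against the current, frozen Alice.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_e = l1(eve(c), p)
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    if step % 1000 == 0:
        print(step, round(loss_ab.item(), 3), round(loss_e.item(), 3))
```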
Wow! That’s incredibly interesting, and a little scary.
I wonder if the need for communicating secretly is one of the basic AI drives.
There doesn’t seem to be a lot to be scared about here—given a goal of encryption, the AI will make encryption. This … should be expected.
Not given a goal of encryption, the AI will … ? That’s a more interesting answer, and that’s what you seem to be scared about. But it’s not what the article is talking about, and as such, this shouldn’t scare you.
The interesting / slightly scary part is that we have no idea how the encryption is done, because NNs are totally opaque about their algorithms. So we have an encryption that works, but we cannot even understand it.
Now it seems a bit scarier?
So we have an encryption that works, but we cannot even understand it.

It works in the sense that a neural net can’t break it. It likely doesn’t work in the sense that a human can’t break it.
When your goal is choosing a safe algorithm, you want the algorithm to be understood well enough that you can prove it is resistant to various attacks.
we have an encryption that works, but we cannot even understand it

Huh? First, NNs are not opaque in the sense that you are talking about here. You can take a trained NN and express it as a function, an algebraic statement. All the terms and the coefficients are out in the open. What you can’t do is ascribe meaning to individual terms or to (easily) evaluate how robust that function is, but that is not “we have no idea how the encryption is done”.
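To make the “out in the open” point concrete, here is a toy sketch with illustrative random weights standing in for a trained model (nothing from the actual paper): the network below is nothing but an explicit formula, and every coefficient can be read off and written out term by term.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)  # stand-in "trained" weights
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def net(x):
    # The whole "opaque" network is just this algebraic expression.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# The same computation written out term by term, every coefficient visible:
x = np.array([0.5, -1.0])
h = [max(sum(W1[i, j] * x[j] for j in range(2)) + b1[i], 0.0) for i in range(4)]
y = sum(W2[0, i] * h[i] for i in range(4)) + b2[0]
assert np.allclose(y, net(x))
```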
Second, consider the work of a cryptanalyst. He has to break encryptions he doesn’t understand (yet) and often doesn’t know “how they are done”. I don’t think there were any claims that this encryption scheme is especially secure. Give it to competent cryptanalysts and they will break it.
What you can’t do is ascribe meaning to individual terms or to (easily) evaluate how robust that function is

Yeah, that’s normally what “understand” means.
Second, consider the work of a cryptanalyst. He has to break encryptions he doesn’t understand (yet) and often doesn’t know “how they are done”

Nah, that’s the old “security through obscurity”, which is the first target of any cryptanalyst and usually the least demanding part of the job (usually algorithms are provided by agents in the field). The “fun” part is breaking a known cypher. But here we have a cypher that we cannot make sense of:

We don’t know exactly how the encryption method works, as machine learning provides a solution but not an easy way to understand how it is reached.
I don’t think that in the context of this discussion we care about which part is fun and which is not. Cryptanalysts often assume knowledge of the cypher because it’s a realistic assumption that the attacker will have it. However, there are a variety of techniques that assume only the availability of cyphertext, or of both cyphertext and plaintext, without knowing what the algorithm is.
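As a minimal illustration of a cyphertext-only technique (a toy example of mine, nothing a real cryptanalyst would be impressed by): a single-byte XOR cypher can be recovered purely by scoring candidate plaintexts for English-like character frequencies, with no specification of the algorithm’s internals beyond a guess at its family.

```python
def score_english(text: bytes) -> int:
    # Crude fitness: count characters that are common in English text.
    return sum(text.lower().count(c) for c in b" etaoinshrdlu")

def crack_single_byte_xor(ct: bytes):
    # Try every key; keep the one whose decryption looks most English-like.
    key = max(range(256), key=lambda k: score_english(bytes(b ^ k for b in ct)))
    return key, bytes(b ^ key for b in ct)

msg = b"attack at dawn, bring the long ladders"
ct = bytes(b ^ 0x42 for b in msg)      # the "unknown" cypher
key, recovered = crack_single_byte_xor(ct)
assert recovered == msg
```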
we have a cypher that we cannot make sense of

Um, evidence? Did any professional cryptographer say that? Was a new class of cyphers invented? Should we use this cypher for important communications?
Hmmhhhh I dunno. Maybe you got scared by that. I do know the realization you’re talking about, though; I had it a couple years ago when I read an article about scientists who tried to see if they could use evolutionary strategies to get a circuit board with 100 components (or an FPGA or something) to process sounds. If it worked, they could just copy the design to other chips and they’d have a really small and cheap sound-processing chip!
So they do this, and they help the design along a bit by selecting promising designs and cutting out sections which don’t do anything at all, and eventually they have a chip that seems to do pretty well (I don’t know how accurate it was, something like 95%?). So, on to the reveal: how does it work?

Well, there’s a good 50-70 components being used to process the sounds properly, which is pretty cool, but there’s also a group of 5 components just… doing… nothing. How weird. So they disabled these 5 components (which, I’ll remind you, seemed not to be connected to anything else), and the chip stopped working.

Somehow, the algorithm had made use of manufacturing flaws in the chip, incorporating them into its design. How it works, they didn’t know. Maybe some electrons jumped the gap somehow. But that showed me how, if you give such an optimization algorithm a task, it will do that task to a fault, and you will not understand the result.
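The general shape of that experiment’s loop is just mutate, score, select. Here is a minimal sketch of my own (the fitness function is a toy stand-in, not a tone-discrimination measure, and the “design” is a plain bitstring rather than a circuit):

```python
import random

random.seed(1)
TARGET = [random.randint(0, 1) for _ in range(100)]  # hypothetical "ideal" design

def fitness(design):
    # Toy stand-in objective; the real experiment scored tone discrimination.
    return sum(d == t for d, t in zip(design, TARGET))

def mutate(design, rate=0.02):
    # Flip each "component" with a small probability.
    return [1 - d if random.random() < rate else d for d in design]

best = [random.randint(0, 1) for _ in range(100)]
for generation in range(2000):
    children = [mutate(best) for _ in range(10)]
    best = max(children + [best], key=fitness)  # keep the most promising design

print(fitness(best), "/ 100")
```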
Same thing here. Create an encryption! Let it run for a ton of cycles. The result is something that works in an unexpected and as yet not understood way. I had expected that result, so it didn’t scare me.
But that showed me how, if you give such an optimization algorithm a task, it will do that task to a fault, and you will not understand the result.

Exactly, and that’s why we call that process “summoning Azathoth”. “Scary” is 95% just tongue in cheek; the other 5% is awe at how puny our brains are and what surprising damage could be done by a rogue algorithm.
Seems we agree.
I had it a couple years ago when I read an article about scientists who tried to see if they could use evolutionary strategies to get a circuit board with 100 components (or an FPGA or something) to process sounds.

I remember that story, but I don’t have a source for it. Does anybody have the source?