Hello. I am Joseph Van Name. I have been active on this site for much of this year, but I never introduced myself. I have a Ph.D. in Mathematics, and I am a cryptocurrency creator. I am currently interested in using LSRDRs and related constructions to interpret ML models and to produce more interpretable ML models.
Let A_1,…,A_r ∈ M_n(ℝ) be n×n matrices. Define a fitness function L_{A_1,…,A_r;d} : M_d(ℝ)^r → [0,∞) by letting L_{A_1,…,A_r;d}(X_1,…,X_r) = ρ(A_1⊗X_1+⋯+A_r⊗X_r) / ρ(X_1⊗X_1+⋯+X_r⊗X_r)^{1/2} (here ρ(A) denotes the spectral radius of A, while A⊗B is the tensor product of A and B). If (X_1,…,X_r) locally maximizes L_{A_1,…,A_r;d}(X_1,…,X_r), then we say that (X_1,…,X_r) is an L_{2,d}-spectral radius dimensionality reduction (LSRDR) of (A_1,…,A_r) (and we can generalize the notion of an LSRDR to a complex and quaternionic setting).
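For concreteness, here is a minimal numerical sketch of this definition in Python (assuming NumPy). The names spectral_radius, lsrdr_fitness, and find_lsrdr are just illustrative, and since the definition only asks for a local maximizer, the random-search hill climbing below is merely one simple way to look for one; it is not a prescribed optimizer.

```python
import numpy as np


def spectral_radius(M):
    """Spectral radius: the largest absolute value of an eigenvalue of M."""
    return np.max(np.abs(np.linalg.eigvals(M)))


def lsrdr_fitness(As, Xs):
    """L_{A_1,...,A_r;d}(X_1,...,X_r)
    = rho(sum_i A_i (x) X_i) / rho(sum_i X_i (x) X_i)^(1/2)."""
    numerator = spectral_radius(sum(np.kron(A, X) for A, X in zip(As, Xs)))
    denominator = spectral_radius(sum(np.kron(X, X) for X in Xs)) ** 0.5
    return numerator / denominator


def find_lsrdr(As, d, steps=5000, step_size=0.05, seed=0):
    """Illustrative optimizer (an assumption, not part of the definition):
    random-search hill climbing on the tuple (X_1,...,X_r) in M_d(R)^r."""
    rng = np.random.default_rng(seed)
    Xs = [rng.standard_normal((d, d)) for _ in As]
    best = lsrdr_fitness(As, Xs)
    for _ in range(steps):
        trial = [X + step_size * rng.standard_normal((d, d)) for X in Xs]
        value = lsrdr_fitness(As, trial)
        if value > best:
            Xs, best = trial, value
    return Xs, best


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d, r = 4, 2, 3
    As = [rng.standard_normal((n, n)) for _ in range(r)]
    Xs, fitness = find_lsrdr(As, d)
    print("approximate LSRDR fitness:", fitness)
```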
I originally developed the notion of an LSRDR in order to evaluate the cryptographic security of block ciphers such as the AES block cipher or small block ciphers, but it seems like LSRDRs have much more potential in machine learning and interpretability. I hope that LSRDRs and similar constructions can make our AI safer (LSRDRs do not seem directly dangerous at the moment since they currently cannot replace neural networks).
I’m curious, and I’ve been thinking about some opportunities for cryptanalysis to contribute to QA for ML products, particularly in the interp area. But I’ve never looked at spectral methods or thought about them at all before! At a glance they seem promising. I’d love to see more from you on this.
I will certainly make future posts about spectral methods in AI, since spectral methods are already quite important in ML, and it appears that new and innovative spectral methods will help improve AI and especially AI interpretability (though I do not see them replacing deep neural networks). I am sure that LSRDRs can be used for QA for ML products and for interpretability (but it is too early to say much about best practices for using LSRDRs to interpret ML). I don’t think I have much to say at the moment about other cryptanalytic tools being applied to ML, though (except for general statistical tests, and other mathematicians have just as much to say about those).
Added 8/4/2023: On second thought, people on this site really seem to dislike mathematics (they seem to love discrediting themselves). I should probably go elsewhere where I can have higher-quality discussions with better people.