In this note, I will continue to demonstrate not only the ways in which LSRDRs ($L_{2,d}$-spectral radius dimensionality reductions) behave mathematically, but also how one can get the most out of them. LSRDRs are one of the machine learning models I have been working on, and they have characteristics suggesting that they are often inherently interpretable, which should be good for AI safety.
Suppose that $N$ is the quantum channel that maps an $n$-qubit state to an $n$-qubit state by selecting one of the $n$ qubits uniformly at random and sending it through the completely depolarizing channel (the completely depolarizing channel takes a state as input and returns the completely mixed state as output). Suppose that $A_1,\dots,A_{4n}$ are $2^n\times 2^n$ matrices giving a Kraus representation $N(X)=\sum_{k=1}^{4n}A_kXA_k^*$.
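As a concrete sketch of this setup (my own construction; any other Kraus representation of $N$ is unitarily related to this one), the $4n$ Kraus operators can be taken to be tensor products that apply one of the four Pauli matrices, scaled by $1/(2\sqrt{n})$, to a single qubit:

```python
import numpy as np
from functools import reduce

# Pauli matrices; the completely depolarizing channel on a single qubit
# has Kraus operators sigma/2 for sigma in {I, X, Y, Z}.
PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def kraus_ops(n):
    """The 4n Kraus operators A_k (each 2^n x 2^n) of the channel that
    picks one of the n qubits uniformly at random and depolarizes it."""
    ops = []
    for i in range(n):           # which qubit gets depolarized
        for sigma in PAULIS:     # which Pauli acts on that qubit
            factors = [np.eye(2, dtype=complex)] * n
            factors[i] = sigma
            # 1/2 from the depolarizing channel, 1/sqrt(n) from the
            # uniform random choice of qubit
            ops.append(reduce(np.kron, factors) / (2 * np.sqrt(n)))
    return ops

# Sanity check: sum_k A_k^* A_k = I, i.e. the channel is trace preserving.
n = 3
A = kraus_ops(n)
S = sum(a.conj().T @ a for a in A)
print(np.allclose(S, np.eye(2 ** n)))  # prints True
```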
The objective is to locally maximize the fitness level $\rho\left(\sum_{k=1}^{4n}z_kA_k\right)/\|(z_1,\dots,z_{4n})\|$ where the norm in question is the Euclidean norm and where $\rho$ denotes the spectral radius. This is a 1-dimensional case of an LSRDR of the channel $N$.
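A minimal numerical sketch of this maximization, assuming the Kraus operators constructed as above and restricting to real $z$ for simplicity (BFGS on the negative fitness is just one convenient optimizer, not necessarily the training procedure used for the experiments in this post):

```python
import numpy as np
from functools import reduce
from scipy.optimize import minimize

PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def kraus_ops(n):
    """4n Kraus operators (each 2^n x 2^n) of the random-qubit
    depolarizing channel N, as sketched earlier."""
    ops = []
    for i in range(n):
        for sigma in PAULIS:
            factors = [np.eye(2, dtype=complex)] * n
            factors[i] = sigma
            ops.append(reduce(np.kron, factors) / (2 * np.sqrt(n)))
    return ops

def fitness(z, A):
    """rho(sum_k z_k A_k) / ||z||, with rho the spectral radius."""
    M = sum(zk * a for zk, a in zip(z, A))
    return np.max(np.abs(np.linalg.eigvals(M))) / np.linalg.norm(z)

n = 3
A = kraus_ops(n)
rng = np.random.default_rng(0)
z0 = rng.standard_normal(4 * n)
# Local maximization of the fitness via BFGS on its negation.
res = minimize(lambda z: -fitness(z, A), z0, method="BFGS")
print(f"locally optimal fitness: {fitness(res.x, A):.4f}")
```

Since the fitness is scale invariant in $z$ and each $A_k$ has operator norm $1/(2\sqrt{n})$, the fitness always lies in $(0,1]$, which makes a convenient sanity check on the optimizer's output.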
Let $A=\sum_{k=1}^{4n}z_kA_k$ where $(z_1,\dots,z_{4n})$ is selected to locally maximize the fitness level. My empirical calculations show that there is some constant $\lambda$ for which $\lambda A$ is positive semidefinite with eigenvalues $\{0,\dots,n\}$, where the eigenvalue $k$ has multiplicity $\binom{n}{k}$, the binomial coefficient. But these are empirical calculations for select values of $n$; I have not mathematically proven that this holds for all local maxima of the fitness level (I have not tried to come up with a proof).
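As a quick consistency check on this claimed spectrum, the multiplicities add up to the dimension of the matrix, so the eigenvalue pattern exactly fills a $2^n\times 2^n$ matrix:

```latex
\sum_{k=0}^{n} \binom{n}{k} = 2^{n},
\qquad
\operatorname{tr}(\lambda A) = \sum_{k=0}^{n} k\binom{n}{k} = n\,2^{n-1}.
```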
Here, we have obtained a complete characterization of $A$ up to unitary equivalence thanks to the spectral theorem, so we are quite close to completely interpreting the local maximum of our fitness function.
I made a few YouTube videos showcasing the process of maximizing the fitness level:
Spectra of 1 dimensional LSRDRs of 6 qubit noise channel during training
Spectra of 1 dimensional LSRDRs of 7 qubit noise channel during training
Spectra of 1 dimensional LSRDRs of 8 qubit noise channel during training
I will make another post soon about higher-dimensional LSRDRs of the same channel $N$.