I do not see any evidence that large language models are equipped to understand the structure behind prime numbers. But transformers, along with other machine learning tools, should be well equipped to investigate other mathematical structures. In particular, I am thinking of the mathematical structures called Laver-like algebras, which I have been researching on and off since about 2015.
I have developed an algorithm that is capable of producing new Laver-like algebras from old ones. From every Laver-like algebra, one can generate a sequence of non-commutative polynomials. From these non-commutative polynomials, one can recover the entire Laver-like algebra up to critical equivalence. Therefore, to use deep learning to investigate the structure of Laver-like algebras, we need a few things.
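To give a sense of the kind of objects involved (this is not my generation algorithm, just the simplest examples of Laver-like algebras), the classical Laver table A_n on {1, ..., 2^n} can be computed by the standard double recursion:

```python
def laver_table(n):
    """Compute the classical Laver table A_n on {1, ..., 2**n}.

    The operation * is the unique one satisfying p * 1 = p + 1 (mod 2**n)
    and the left self-distributive law p * (q * r) = (p * q) * (p * r).
    Rows are filled in decreasing order of p and increasing order of q.
    """
    N = 2 ** n
    table = [[0] * (N + 1) for _ in range(N + 1)]  # 1-indexed; table[p][q] = p * q

    for q in range(1, N + 1):
        table[N][q] = q            # 2**n acts as the left identity
    for p in range(N - 1, 0, -1):
        table[p][1] = p + 1        # p * 1 = p + 1
        for q in range(1, N):
            # p * (q + 1) = (p * q) * (p + 1); since p * q > p, that row is already done
            table[p][q + 1] = table[table[p][q]][p + 1]
    return table

# Example: print the multiplication table of A_2 (4 elements).
for row in laver_table(2)[1:]:
    print(row[1:])
```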
We need a way of transforming the non-commutative polynomials into something that further layers of AI can work with. I suggest using my notion of an LSRDR, which I developed for cryptographic purposes, to transform these non-commutative polynomials into collections of matrices. These matrices are often (but not always) unique up to a constant scalar factor and orthogonal/unitary equivalence. But this is simply what I have come up with using my own methods; there are likely other ways of transforming non-commutative polynomials into vectors that I have simply not thought of.
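The sketch below is not the LSRDR construction; it only shows the crudest possible baseline featurization, namely evaluating a non-commutative polynomial (stored as a list of coefficient/word pairs) at a tuple of matrices. Any representation along these lines could in principle feed the later layers.

```python
import numpy as np

def evaluate_nc_polynomial(poly, matrices):
    """Evaluate a non-commutative polynomial at a tuple of square matrices.

    `poly` is a list of (coefficient, word) pairs, where each word is a tuple
    of variable indices; e.g. [(1.0, (0, 1)), (-2.0, (1, 0, 0))] represents
    x0*x1 - 2*x1*x0*x0.  The empty word () denotes the constant term.
    """
    n = matrices[0].shape[0]
    result = np.zeros((n, n))
    for coeff, word in poly:
        term = np.eye(n)
        for idx in word:
            term = term @ matrices[idx]
        result += coeff * term
    return result

# Example: evaluate the commutator x0*x1 - x1*x0 at two random 3x3 matrices.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
print(evaluate_nc_polynomial([(1.0, (0, 1)), (-1.0, (1, 0))], [A, B]))
```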
We need a way of solving machine learning problems whose input is a sequence of vectors or matrices. This is where we can use transformers.
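As a minimal sketch of this step (all dimensions and names below are placeholders, not part of the proposal), one could flatten each matrix in the sequence into a vector, project it to the model dimension, and run a standard transformer encoder over the resulting sequence:

```python
import torch
import torch.nn as nn

class MatrixSequenceClassifier(nn.Module):
    """Toy model: a sequence of k x k matrices -> a single prediction."""

    def __init__(self, k, d_model=64, nhead=4, num_layers=2, num_classes=2):
        super().__init__()
        self.embed = nn.Linear(k * k, d_model)          # flatten each matrix, project
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_classes)     # e.g. "from rank-into-rank or not"

    def forward(self, mats):                            # mats: (batch, seq_len, k, k)
        b, s, k, _ = mats.shape
        x = self.embed(mats.reshape(b, s, k * k))       # (batch, seq_len, d_model)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))                 # pool over the sequence

# Example: a batch of 8 sequences, each of ten 5x5 matrices.
model = MatrixSequenceClassifier(k=5)
logits = model(torch.randn(8, 10, 5, 5))
print(logits.shape)   # torch.Size([8, 2])
```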
We need a problem or problems to solve, along with an algorithm for deciding which Laver-like algebras to go through next. One such problem would be to estimate how many Laver-like algebras satisfy certain conditions. Another would be to determine whether an algebra arises from the rank-into-rank embeddings or not. Since transformers handle a wide variety of problems, we may be able to train them on a variety of tasks related to Laver-like algebras.
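The selection step is left open above; one naive placeholder (entirely hypothetical, with `expand` and `score` standing in for the old-to-new construction and for a learned interestingness estimate) would be a best-first search over the space of algebras:

```python
import heapq

def best_first_exploration(seed_algebras, expand, score, budget=100):
    """Greedy best-first search: always expand the most 'interesting' algebra seen so far.

    `expand(a)` returns new algebras produced from `a`, and `score(a)` is any
    interestingness estimate, e.g. a model's predicted probability that `a`
    comes from rank-into-rank embeddings.  Both are hypothetical stand-ins.
    """
    frontier = [(-score(a), i, a) for i, a in enumerate(seed_algebras)]
    heapq.heapify(frontier)
    visited, counter = [], len(frontier)
    while frontier and len(visited) < budget:
        _, _, a = heapq.heappop(frontier)
        visited.append(a)
        for b in expand(a):
            counter += 1                       # tiebreaker so algebras are never compared
            heapq.heappush(frontier, (-score(b), counter, b))
    return visited

# Toy usage with integers standing in for algebras.
explored = best_first_exploration([1], expand=lambda a: [2 * a, 2 * a + 1],
                                  score=lambda a: 1.0 / a, budget=10)
print(explored)
```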
Let me know if you want more detailed information about this proposal.
Feel free to reach me via email. However, I must note that Sasha and I are currently focused on existing projects of the Simons Collaboration on Arithmetic Geometry, Number Theory, and Computation.
If your research proposal can be formulated within the scope of that research program, that would improve the odds of a collaboration in the medium term.