Perhaps it is best to develop AI systems that we can prove theorems about in the first place. AI systems that we can prove theorems about are more likely to be interpretable anyway. Fortunately, there are quite a few theorems about the maxima and minima of functions, including uniqueness theorems such as the following.
Theorem: (maximum principle) If $C\subseteq\mathbb{R}^n$ is a compact set, and $f:C\rightarrow[-\infty,\infty)$ is an upper semicontinuous function that is subharmonic on the interior $C^\circ$, then $\max_{x\in C}f(x)=\max_{x\in\partial C}f(x)$.
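The maximum principle is easy to spot-check numerically. Here is a minimal sketch (assuming NumPy; the grid resolution and the choice of subharmonic function are arbitrary illustrations) that samples the subharmonic function $f(x,y)=x^2+y^2$ (its Laplacian is $4>0$) on the closed unit disk and confirms that the maximum over the disk agrees with the maximum over the boundary circle.

```python
import numpy as np

# Sample the closed unit disk on a fine grid.
xs = np.linspace(-1.0, 1.0, 801)
X, Y = np.meshgrid(xs, xs)
disk = X**2 + Y**2 <= 1.0

# f(x, y) = x^2 + y^2 is subharmonic since its Laplacian is 4 > 0.
f = X**2 + Y**2

max_over_disk = f[disk].max()

# Thin shell of grid points approximating the boundary circle.
shell = disk & (X**2 + Y**2 > 1.0 - 5e-3)
max_over_boundary = f[shell].max()

# Both maxima are approximately 1, as the maximum principle predicts.
print(max_over_disk, max_over_boundary)
```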
If $U$ is a bounded domain, and $f:U\rightarrow\mathbb{R}$ is a $C^1$-function, then define the Dirichlet energy of $f$ as
$$E(f)=\frac{1}{2}\cdot\int_U\|(\nabla f)(x)\|^2\,dx.$$
Theorem: (Dirichlet principle) Suppose that $C$ is a compact subset of $\mathbb{R}^n$, $g:\partial C\rightarrow\mathbb{R}$ is continuous, and $f:C\rightarrow\mathbb{R}$ is a continuous function which is harmonic on $C^\circ$ and where $f|_{\partial C}=g$. Then $f$ is the $C^2$-function that minimizes the Dirichlet energy $E(f)$ subject to the condition that $f|_{\partial C}=g$.
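As a sanity check of the Dirichlet principle, here is a minimal numerical sketch (assuming NumPy; the grid size, boundary data, and perturbation scale are arbitrary choices for illustration). It solves the discrete Laplace equation on the unit square by Jacobi iteration and verifies that perturbing the interior values, while keeping the boundary data fixed, only increases the discrete Dirichlet energy.

```python
import numpy as np

n = 64
h = 1.0 / (n - 1)                          # grid spacing on [0, 1]^2
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)

# Boundary data g(x, y) = x^2 - y^2; the interior is initialized to zero.
u = X**2 - Y**2
u[1:-1, 1:-1] = 0.0

# Jacobi iteration for the 5-point discrete Laplace equation (harmonic in the interior).
for _ in range(20000):
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])

def dirichlet_energy(v):
    """Discrete analogue of E(v) = (1/2) * integral of ||grad v||^2."""
    gx = np.diff(v, axis=1) / h
    gy = np.diff(v, axis=0) / h
    return 0.5 * (np.sum(gx**2) + np.sum(gy**2)) * h**2

# Any perturbation of the interior (with the boundary values unchanged)
# should not decrease the energy.
rng = np.random.default_rng(0)
w = u.copy()
w[1:-1, 1:-1] += 0.05 * rng.standard_normal((n - 2, n - 2))

print(dirichlet_energy(u), dirichlet_energy(w))   # the harmonic u has the smaller energy
```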
Theorem: (J. Van Name: a version of the Cauchy-Schwarz inequality) Suppose that $A_1,\dots,A_r\in M_n(\mathbb{C})$ have no common non-trivial invariant subspace and $A_1A_1^*+\dots+A_rA_r^*=1_n$. Suppose furthermore that $L:M_n(\mathbb{C})^r\rightarrow\mathbb{R}$ is the partial function where
$$L(X_1,\dots,X_r)=\frac{\rho(A_1\otimes(X_1^*)^T+\dots+A_r\otimes(X_r^*)^T)}{\rho(X_1\otimes(X_1^*)^T+\dots+X_r\otimes(X_r^*)^T)^{1/2}}.$$
Then $L(X_1,\dots,X_r)\leq 1$, and $L(X_1,\dots,X_r)=1$ if and only if there is some non-zero complex number $\lambda$ and an invertible matrix $C$ with $X_j=\lambda CA_jC^{-1}$ for $1\leq j\leq r$. Here, $\rho(X)$ denotes the spectral radius of a matrix $X$.
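This inequality can be spot-checked numerically. Below is a minimal sketch (assuming NumPy; the dimensions $n$, $r$, the random seed, and the particular $\lambda$ and $C$ are arbitrary choices for illustration). It draws random matrices $A_1,\dots,A_r$, rescales them so that $A_1A_1^*+\dots+A_rA_r^*=1_n$, evaluates $L$ at random inputs to check that $L\leq 1$, and then evaluates $L$ at inputs of the form $X_j=\lambda CA_jC^{-1}$ to check that $L=1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 4, 3                                  # small dimensions chosen for illustration

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

# Random A_1, ..., A_r, rescaled so that A_1 A_1^* + ... + A_r A_r^* = 1_n.
A = rng.standard_normal((r, n, n)) + 1j * rng.standard_normal((r, n, n))
S = sum(a @ a.conj().T for a in A)           # Hermitian positive definite
M0 = np.linalg.cholesky(S)                   # S = M0 M0^*
A = np.array([np.linalg.inv(M0) @ a for a in A])
assert np.allclose(sum(a @ a.conj().T for a in A), np.eye(n))

def L(X):
    # (X_j^*)^T is the entrywise complex conjugate of X_j.
    num = spectral_radius(sum(np.kron(A[j], X[j].conj()) for j in range(r)))
    den = spectral_radius(sum(np.kron(X[j], X[j].conj()) for j in range(r))) ** 0.5
    return num / den

# Random inputs: L should be at most 1 (and generically strictly below 1).
X = rng.standard_normal((r, n, n)) + 1j * rng.standard_normal((r, n, n))
print(L(X))

# Inputs of the form X_j = lambda * C A_j C^{-1}: L should be approximately 1.
lam = 2.0 - 1.5j
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = np.array([lam * C @ a @ np.linalg.inv(C) for a in A])
print(L(Y))
```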
Right now, there are some AI systems related to the above theorems. For example, given a graph, the eigenvectors corresponding to the smallest eigenvalues of the graph Laplacian reveal clusters in the graph, so spectral techniques can already be used to analyze graphs, and I am working on developing spectral AI systems with more advanced capabilities. It will take some time for spectral AI systems to catch up with deep neural networks while retaining their mathematical properties (such as fitness functions with an almost unique local maximum). It is plausible that the development of spectral AI systems is mostly a matter of engineering and hard work rather than a problem of possibility, but I am not sure whether spectral AI systems, once well developed, will be as energy efficient as neural networks.
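To make the graph example concrete, here is a minimal spectral-clustering sketch (assuming NumPy; the toy two-block graph is an arbitrary illustration and not one of the spectral AI systems mentioned above). The eigenvector for the second-smallest eigenvalue of the graph Laplacian (the Fiedler vector) separates two dense blocks that are joined by a single edge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: two dense 10-node blocks connected by a single edge.
n = 20
A = np.zeros((n, n))
A[:10, :10] = rng.random((10, 10)) < 0.8
A[10:, 10:] = rng.random((10, 10)) < 0.8
A = np.triu(A, 1)                     # strict upper triangle: no self-loops
A[0, 10] = 1.0                        # the single edge between the two blocks
A = A + A.T                           # symmetric adjacency matrix

# Graph Laplacian L = D - A.
Lap = np.diag(A.sum(axis=1)) - A

# Eigenvector of the second-smallest eigenvalue (the Fiedler vector).
eigenvalues, eigenvectors = np.linalg.eigh(Lap)
fiedler = eigenvectors[:, 1]

# The sign pattern of the Fiedler vector recovers the two blocks.
print(np.where(fiedler < 0)[0])
print(np.where(fiedler >= 0)[0])
```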