Minimal circuits are not quite the same as fastest programs—they have no adaptive computation, so you can’t e.g. memorize a big table but only use part of it. In some sense it’s just one more (relatively extreme) way of making a complexity-speed tradeoff. I basically agree that a GLUT is always faster than meta-learning if you have arbitrary adaptive computation.
That said, I don’t think it’s totally right to call a GLUT constant complexity—if you have an n-bit input and an m-bit output, then it takes at least n+m operations to evaluate the GLUT (in any reasonable low-level model of computation).
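To make the n+m point concrete, here is a toy sketch (my own illustration, not anything from the thread): even with the table lookup itself treated as free, a GLUT must read every input bit to form the key and emit every output bit, so the bit-level work is at least n+m.

```python
# Toy model: a GLUT over n-bit inputs with m-bit outputs.
# The mapping (x * 13 mod 2**m) is an arbitrary stand-in for any function.
n, m = 8, 4
glut = {x: format((x * 13) % (1 << m), f"0{m}b") for x in range(1 << n)}

def lookup(bits):
    """Even if the dict access were free, we do n reads + m writes."""
    assert len(bits) == n
    key = 0
    for b in bits:                 # n operations: every input bit is read
        key = (key << 1) | b
    out = glut[key]
    return [int(c) for c in out]   # m operations: every output bit is written

result = lookup([1, 0, 1, 1, 0, 0, 1, 0])
```

The dict here stands in for the GLUT; the point is only that the key-building and output-writing loops already cost n+m, regardless of how the table is stored.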
There are even more speed-focused methods than minimal circuits. I think the most extreme versions are more like query complexity or communication complexity, which in some sense are just asking how fast you can make your GLUT—can you get away with reading only a small set of input bits? But being totally precise about “fastest” requires being a lot more principled about the model of computation.
Any n-bit hash function will produce collisions once the number of elements in the hash table exceeds the 2^n possible hash values, so adding new elements eventually requires rehashing into a wider table to avoid collisions. This gives the GLUT logarithmic time complexity in the limit. Meta-learning can likewise have constant time complexity for an arbitrarily large (but finite) number of tasks, though not in the limit, assuming a finite neural network.
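The pigeonhole step above can be sketched directly (the modular "hash" is a deliberately trivial stand-in for any n-bit hash function):

```python
# With an n-bit address space there are only 2**n buckets, so one more
# item than that guarantees a collision, and staying collision-free
# requires the address width to grow like log2(#items).
import math

n = 8
capacity = 2 ** n          # at most 2**n distinct buckets
items = capacity + 1       # one more item than buckets

# Toy hash: reduce each item into the n-bit address space.
buckets = {i % capacity for i in range(items)}
# Pigeonhole: fewer occupied buckets than items means a collision occurred.
collision_unavoidable = len(buckets) < items

# Address bits needed to index num_items collision-free slots:
bits_needed = {k: math.ceil(math.log2(k)) for k in (10, 1000, 10**6)}
```

The logarithmic factor enters through `bits_needed`: each lookup must at least process an address of that width, which grows with the number of stored elements.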
A GLUT can have constant time complexity using a hash table, which makes it a lot less clear that meta-learning can be faster.