Google Maps as an oracle with very little overhead
I think you sort of hit it when you wrote:
To me LLMs under iteration look like Oracles, and whenever I look at any intelligent system (including humans), it just looks like there is an Oracle at the heart of it.
Not an ideal Oracle that can answer anything, but an Oracle that does its best, and in all biological systems it learns continuously.
The fact that “do it step by step” made LLMs much better apparently came as a surprise to some, but if you look at it like an Oracle[1], it makes a lot of sense (IMO).
The inner loop would be f_llm : c ⇀ t ∈ T
Where c is the context window (tokens 1..N) and t is the output token (whatever we select) from the total possible set of tokens T.
We append t to c and apply f_llm again.
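Very roughly, that inner loop in toy code, just to make the shape of it concrete. Everything here is made up for illustration: a tiny bigram table stands in for the frozen model, and next_token/generate are just my names for the two pieces.

    # Toy sketch of the inner loop f_llm : c ⇀ t ∈ T.
    # A bigram lookup table stands in for the frozen parameters; swap
    # next_token for a real model call and the loop stays the same.

    TOKENS = ["<stop>", "the", "cat", "sat"]                 # the token set T
    BIGRAM = {"the": "cat", "cat": "sat", "sat": "<stop>"}   # frozen "parameters"

    def next_token(context: list[str]) -> str:
        """f_llm: map the context window c to one token t in T."""
        return BIGRAM.get(context[-1], "<stop>")

    def generate(context: list[str], max_new: int = 16) -> list[str]:
        """The inner loop: pick t, append it to c, apply f_llm again."""
        for _ in range(max_new):
            t = next_token(context)    # t ∈ T
            context = context + [t]    # append t to c
            if t == "<stop>":          # the model decides when to halt
                break
        return context

    print(generate(["the"]))   # ['the', 'cat', 'sat', '<stop>']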
And somehow that looks like an Oracle f_oracle : q ⇀ s ∈ S, where q is the question and s is the solution pulled from the set of all possible solutions S.
Obviously LLMs have limited reach into S, but that really seems to be because of limits on c and the fact that f_llm is frozen (the parameters are frozen).
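And the outer, Oracle-looking map f_oracle : q ⇀ s ∈ S is then just what you get by iterating that loop and reading off what it appended. Again a sketch, reusing the toy generate() from above; "oracle" is just my name for it, not any standard API.

    def oracle(question: list[str]) -> list[str]:
        """f_oracle: q ⇀ s ∈ S, built by iterating the inner loop above."""
        full = generate(question)       # grow the context until it halts
        return full[len(question):]     # s = whatever got appended after q

    print(oracle(["the"]))   # ['cat', 'sat', '<stop>']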