I think one of the key intuitions here is that in a high-dimensional problem, random babbling takes far too long to solve the problem: with n binary dimensions, the search space grows as 2^n. If n is, say, over 100, then covering it requires more random ideas than anyone will generate in a million years.
Given that most real-world problems are high-dimensional, babbling alone will never get you to a solution.
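A back-of-envelope check of that claim, under the generous assumption that someone produces one idea per second nonstop:

```python
# Back-of-envelope: can random babbling cover a 100-dimensional
# binary search space within a million years of idea generation?
n = 100
search_space = 2 ** n  # distinct candidate solutions

seconds_per_year = 365.25 * 24 * 3600
ideas_per_second = 1  # assumption: one idea every second, nonstop
ideas_in_a_million_years = ideas_per_second * seconds_per_year * 1_000_000

print(f"{search_space:.2e} candidates vs {ideas_in_a_million_years:.2e} ideas")
print(f"shortfall factor: {search_space / ideas_in_a_million_years:.1e}")
```

Even at that rate, the space of 2^100 candidates exceeds a million years of output by a factor of more than 10^16.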
Yeah, but the random babbling isn’t solving the problem here; it’s used as random seeds to improve your own thought-generator’s ability to explore. Like, consider cognition as motion through a mental landscape. Once a motion is made in some direction, human minds’ negative creativity biases them towards continuing to move in that same direction. There’s a very narrow “cone” of possible directions in which we can proceed from a given point; we can’t stop and turn in an arbitrary direction. LLMs’ babble, in this case, is meant to widen that cone by adding entropy to our “cognitive aim”, letting us make sharper turns.
In this frame, the human is still doing all the work: they’re the ones picking the ultimate direction and making the motions, the babble just serves as vague inspiration.
Or maybe all of that is overly abstract nonsense.