Disclaimer: Haven’t actually tried this myself yet, naked theorizing.
“We made a wrapper for an LLM so you can use it to babble random ideas!”
I’d like to offer a steelman of that idea. Humans have negative creativity — it takes conscious effort to come up with novel spins on what you’re currently thinking about. An LLM babbling about something vaguely related to your thought process can serve as a source of high-quality noise, noise that is both sufficiently random to spark novel thought processes and relevant enough to prompt novel thoughts on the actual topic you’re thinking about (instead of sending you off in a completely random direction). Tools like Loom seem optimized for that.
It’s nothing a rubber duck or a human conversation partner can’t offer, qualitatively, but it’s more stimulating than the former, and is better than the latter in that it doesn’t take up another human’s time and is always available to babble about what you want.
Not that it’d be a massive boost to productivity, but it might lower the friction costs of engaging in brainstorming, making it less effortful.
… Or it might degrade your ability to think about the subject matter mechanistically, and instead optimize your ideas toward what merely sounds like it makes sense semantically. Depends on how seriously you’d be taking the babble, perhaps.
I think one of the key intuitions here is that in a high-dimensional problem, random babbling takes far too long to solve the problem: if each of n dimensions is even just a binary choice, the search space grows as 2^n. If n is, say, over 100, then solving it requires more random ideas than anyone could generate in a million years.
Given that most real-world problems are high-dimensional, babbling alone will never get you to the solution.
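The arithmetic behind that claim can be made concrete. A quick sketch, where the one-idea-per-second babble rate is my own assumption for illustration:

```python
# Toy arithmetic for the "2^n" argument: model a problem as n independent
# binary design choices, so the space of candidate ideas has 2**n points.
# The babble rate of one idea per second is an assumed figure.

n = 100
search_space = 2 ** n  # number of distinct candidate "ideas"

ideas_per_second = 1  # assumed babble rate
seconds_per_year = 365.25 * 24 * 3600
ideas_in_a_million_years = ideas_per_second * seconds_per_year * 1_000_000

print(f"search space:      {search_space:.2e}")
print(f"ideas in 1M years: {ideas_in_a_million_years:.2e}")
print(f"fraction explored: {ideas_in_a_million_years / search_space:.2e}")
```

Even at one idea per second, nonstop, a million years of babbling covers a vanishingly small fraction of a 100-dimensional binary search space.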
Yeah, but the random babbling isn’t solving the problem here — it’s used as a random seed to improve your own thought-generator’s ability to explore. Like, consider cognition as motion through the mental landscape. Once a motion is made in some direction, human minds’ negative creativity means they’re biased towards continuing to move in the same direction. There’s a very narrow “cone” of possible directions in which we can proceed from a given point; we can’t stop and turn in an arbitrary direction. LLMs’ babble, in this case, is meant to increase the width of that cone by adding entropy to our “cognitive aim”, letting us make sharper turns.
In this frame, the human is still doing all the work: they’re the ones picking the ultimate direction and making the motions, the babble just serves as vague inspiration.
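The “cone” metaphor can be sketched as a toy simulation — all parameters here are invented purely for illustration, and a heading that drifts by at most `cone_width` radians per step is my stand-in for a thinker’s momentum:

```python
import math
import random

# Toy model of the "cone of directions" metaphor: a thinker's heading
# through idea-space can only turn by at most `cone_width` radians per
# step. Counting how many of 12 angular sectors the heading ever visits
# is a crude proxy for how much of the space of directions gets explored.
# All numbers are made up for illustration.

def directions_explored(steps, cone_width, seed, bins=12):
    rng = random.Random(seed)
    heading = 0.0
    visited = set()
    for _ in range(steps):
        # each step, the heading can only drift within the current cone
        heading += rng.uniform(-cone_width, cone_width)
        sector = int((heading % (2 * math.pi)) / (2 * math.pi) * bins) % bins
        visited.add(sector)
    return len(visited)

narrow = directions_explored(5000, cone_width=0.005, seed=0)  # unaided: tiny cone
wide = directions_explored(5000, cone_width=0.8, seed=0)      # babble-widened cone
print(narrow, wide)
```

Widening the cone — the stand-in for babble injecting entropy — lets the walk cover far more of the directional space in the same number of steps, without the walker doing anything smarter.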
Or maybe all of that is overly abstract nonsense.