There’s an improvement in LLMs I’ve seen that is important but has wildly inflated people’s expectations beyond what’s reasonable:
LLMs have hit a point on some impressive tests where their failures no longer reliably push them past the threshold of being unrecoverable. They are conservative enough that they can do search on a problem, failing a million times until they mumble into an answer.
I’m going to try writing something of at least not-embarrassing quality about my thoughts on this, but I am really confused by people’s hype around this sort of thing; it feels like directed randomness.
No, sorry, that’s not a typo; that’s a linguistic norm that I probably assumed was more common than it actually is.
The people I talk with and I have used the words “mumble” and “babble” to describe LLM reasoning, sort of like human babble; see https://www.lesswrong.com/posts/i42Dfoh4HtsCAfXxL/babble