This is why I love LessWrong.
Thanks, I’m saving this in case I ever get any symptoms, and I’m also considering taking some mebendazole as a purely preventative measure (I don’t recall ever taking it, and I’ve been exposed to dirty work for years and years now).
I agree, and I’m also confused by the idea that LLMs will be able to bootstrap something more intelligent.
My day job is technical writing, plus a bit of DevOps. This combo ought to be the most LLM-able of all, yet I frequently find myself giving up on trying to tease an answer out of an LLM. And I’m far from the edge of my field!
So how exactly are people at the edge of their fields supposed to make better use of LLMs, let alone get qualitative improvements out of them?
Feels like it’ll have to be humans making the algorithmic improvements, at least up to a point.