Conditional on LLMs scaling to AGI, I feel like it’s a contradiction to say that “LLMs offer little or negative utility AND it’s going to stay this way”. My model is that either we are dying in a couple of years to LLMs getting us to AGI, in which case we are first going to have a year or two of AIs that can provide incredible utility, or we are not dying to LLMs and the timelines are longer.
I think I read somewhere that you don’t believe LLMs will get us to AGI, so this might already be implicit in your model? I personally am putting at least some credence on the ai-2027 model, which predicts superhuman coders in the near future. (Not saying that I believe this is the most probable future, just that I find it convincing enough that I want to be prepared for it.)
Up until recently I was in the “LLMs offer zero utility” camp (for coding, at least). But now we have a Cursor plan at work (I probably still wouldn’t pay for one for personal use), and with a lot of trial and error I feel like I am finding the kinds of tasks where AIs can offer a bit of utility, so I am slowly moving towards the “marginal utility” camp.
One kind of thing I like using it for is small scripts that automate bits of my workflow. E.g. I have an idea for a script, and I know it would take me 30m–1h to implement, but it’s not worth it because it would only save me a few seconds each time it runs. If handing the task to the LLM reduces my time investment to a few minutes, it can suddenly be worth it.
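To make this concrete, here is a minimal sketch of the kind of throwaway script I mean. The task itself (tallying which files I touched most in the last week of git history) is a hypothetical example of mine, not something from the conversation:

```python
#!/usr/bin/env python3
"""List the files changed most often in commits over the last week.

A toy workflow script: mildly handy to glance at, but never worth
30-60 minutes of hand-writing. Exactly the kind of thing I'd now
delegate to an LLM.
"""
import subprocess
from collections import Counter

# Ask git for the file names touched by each commit in the last week.
log = subprocess.run(
    ["git", "log", "--since=1.week", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

counts = Counter(line for line in log.splitlines() if line.strip())
for path, n in counts.most_common(10):
    print(f"{n:3d}  {path}")
```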
I would be interested in other people’s experiences with the negative side effects of LLM use. What are the symptoms/warning signs of “LLM brain rot”? I feel like my current usage leaves me relatively well equipped to avoid it:
- I only ask LLMs for things I know I could solve myself in a few hours, tops.
- I code-review the result and tell the model when it did something stupid.
- 90% of my job is stuff that is currently nowhere close to being LLM-automatable anyway.
This “deductive closure” concept feels way too powerful to me. It is even hinted at later in the conversation, in the part about mathematical proofs, but I don’t think it’s taken to its full conclusion: such a deductive closure would contain all provable mathematical statements, which I am skeptical even an ASI could achieve.[1]
To spell this out more precisely: the deductive closure of just “the set theory axioms” would be “all of mathematics”, including (dis)proofs of all our currently unproven conjectures (e.g. P ≠ NP), and indeed of every possible mathematical statement.[2]
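To make the concept concrete, here is a minimal sketch of a deductive closure as a fixpoint computation. The toy system (forward chaining over Horn-style rules) is my own illustration; the whole point of the argument is that real mathematics is unfathomably harder than this:

```python
# Deductive closure as a fixpoint: keep applying inference rules until
# nothing new is derivable. Facts are atoms; a rule derives its conclusion
# once all its premises are in the closure. (Toy illustration only --
# first-order logic over the set theory axioms is not finite like this,
# and is undecidable in general.)

def deductive_closure(facts: set[str], rules: list[tuple[frozenset[str], str]]) -> set[str]:
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

rules = [
    (frozenset({"A", "B"}), "C"),  # from A and B, derive C
    (frozenset({"C"}), "D"),       # from C, derive D
]
print(sorted(deductive_closure({"A", "B"}, rules)))  # ['A', 'B', 'C', 'D']
```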
[1] Well, as long as we want to stick to some reasonable algorithmic complexity. Otherwise the “just try all possible proofs in sequence” algorithm is something we already have, and it works perfectly (see the sketch below).
[2] Well, as long as they are not undecidable.
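Since footnote [1] mentions it, here is the “just try all possible proofs in sequence” algorithm made literal, reusing the toy system from above. The proof format and the length bound are illustrative choices of mine:

```python
from itertools import product

ATOMS = ["A", "B", "C", "D"]
AXIOMS = {"A", "B"}
RULES = [(frozenset({"A", "B"}), "C"), (frozenset({"C"}), "D")]

def is_valid_proof(proof: tuple[str, ...], target: str) -> bool:
    # A "proof" is a sequence of atoms, each either an axiom or the
    # conclusion of a rule whose premises all appeared earlier in it.
    seen: set[str] = set()
    for atom in proof:
        if atom not in AXIOMS and not any(ps <= seen and c == atom for ps, c in RULES):
            return False
        seen.add(atom)
    return target in seen

def brute_force_prove(target: str, max_len: int = 6):
    # Enumerate every candidate proof in order of length: guaranteed to
    # find a proof iff one exists within the bound, at exponential cost.
    # (Drop the bound and it halts iff the target is provable at all.)
    for n in range(1, max_len + 1):
        for proof in product(ATOMS, repeat=n):
            if is_valid_proof(proof, target):
                return proof
    return None

print(brute_force_prove("D"))  # ('A', 'B', 'C', 'D')
```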