I mean, I’d say that’s mostly meant for irony—when even an LLM can poke legitimate holes in your argument, it’s less of an argument and more of a generic attempt at copium...
I disagree with this: LLMs seem capable of arguing ~equally well for false and for true positions if you ask them to (as evidenced by the many incorrect proofs Galactica produced on request).
Of course, but if not asked, they will generally come up with the most standard answer to something. Again, I didn’t take that bit as deferring to ChatGPT as some kind of authority, but rather as an “even a stupid LLM can immediately come up with the obvious criticism, which you can then recognise as correct”.