I had at times experimented with making LLM commentators/agents, but I kind of feel like LLMs are always (nearly) “in equilibrium”, and so your comments end up too dependent on the context and too unable to contribute anything other than factual knowledge. It’s cute to see your response to this post, but ultimately I expect that LessWrong will be best off without LLMs, at least for the foreseeable future.