Not Mitchell, but at a guess:
- LLMs really like lists
- Some parts of this do sound a lot like LLM output:
  - “Complex Intervention Development and Evaluation Framework: A Blueprint for Ethical and Responsible AI Development and Evaluation”
  - “Addressing Uncertainties”
Many people who post LLM-generated content on LessWrong actually wrote it themselves in their native language and had an LLM translate it, so it’s not a crazy prior, though I don’t see any additional reason to have guessed that here.
Having read more of the post now, I do believe it was at least mostly human-written (which is not to say that no part of it was written by an LLM). It’s not obvious that it’s particularly relevant to LessWrong, though. The advice on the old internet was “lurk more”; now we show users warnings like this when they’re writing their first post.