@Mitchell_Porter What made you think that I am not a native English speaker and what made you think that this post was written by AI?
@RobertM @Mitchell_Porter.
I guess the standardised language for framework development fails the Turing Test.
The title is a play on words, merging the title of the Medical Research Council's guidelines, “Complex Intervention Development and Evaluation Framework” (1), with that of The Economic Forum for AI's “A Blueprint for Equity and Inclusion in Artificial Intelligence” (2). The blog I wrote closely follows the standardised structure for frameworks and guidelines, with specific subheadings that are easy to quote.
“Addressing Uncertainties” is a major requirement in the iterative process of developing and refining complex interventions. I did not come up with it; it is an agreed-upon requirement in high-risk health applications and research.
Would you like to engage with the content of the post? I thought LessWrong was about engaging in debate where people learn and attempt to reach consensus.
My apologies. I’m usually right when I guess that a post has been authored by AI, but it appears you really are a native speaker of one of the academic idioms that AIs have also mastered.
As for the essay itself, it involves an aspect of AI safety or AI policy that I have neglected, namely the management of socially embedded AI systems. I have personally neglected this in favor of SF-flavored topics like “superalignment” because I regard the era in which AIs and humans coexist, with humans still holding the upper hand, as a very temporary thing. Nonetheless, we are still in that era right now, and hopefully some of the people working within that frame will read your essay and comment. I do agree that the public health paradigm seems like a reasonable source of ideas, for the reasons that you give.
Not Mitchell, but at a guess:
LLMs really like lists.
Some parts of this do sound a lot like LLM output:
“Complex Intervention Development and Evaluation Framework: A Blueprint for Ethical and Responsible AI Development and Evaluation”
“Addressing Uncertainties”
Many people who post LLM-generated content on LessWrong wrote it themselves in their native language and had an LLM translate it, so it’s not a crazy prior, though I don’t see any additional reason to have guessed that here.
Having read more of the post now, I do believe it was at least mostly human-written (without this being a claim that it wasn’t at least partially written by an LLM). It’s not obvious that it’s particularly relevant to LessWrong. The advice on the old internet was “lurk more”; now we show users warnings like this when they’re writing their first post.