I guess the standardised language for framework development fails the Turing Test.
The title is a play on words, merging the titles of the guidelines authored by The Medical Research Council, “Complex Intervention Development and Evaluation Framework” (1), and The Economic Forum for AI, “A Blueprint for Equity and Inclusion in Artificial Intelligence” (2). The blog post I wrote closely follows the standardised structure for frameworks and guidelines, with specific subheadings that are easy to quote.
“Addressing Uncertainties” is a major requirement in the iterative process of developing and refining complex interventions. I did not come up with it; it is an agreed-upon requirement in high-risk health applications and research.
Would you like to engage with the content of the post? I thought LessWrong was about engaging in debate where people learn and attempt to reach consensus.
My apologies. I’m usually right when I guess that a post has been authored by AI, but it appears you really are a native speaker of one of the academic idioms that AIs have also mastered.
As for the essay itself, it involves an aspect of AI safety or AI policy that I have neglected, namely, the management of socially embedded AI systems. I have personally neglected this in favor of SF-flavored topics like “superalignment” because I regard the era of AI-human coexistence in which humans still have the upper hand as a very temporary thing. Nonetheless, we are still in that era right now, and hopefully some of the people working within that frame will read your essay and comment. I do agree that the public health paradigm seems like a reasonable source of ideas, for the reasons that you give.
@RobertM @Mitchell_Porter.