People on LessWrong aren’t driven by the motivation to do useful work, but by the craving to read something amusing or to say something smart and witty.
You are asking them to do useful work by giving you important advice, which requires more self-control than people here have.
Maybe, instead of asking for feedback on your entire framework, it would be motivationally easier for them if you divided it into smaller, bite-sized Question Posts and asked one every few days?
You can always hide background information and context in collapsible sections.
You can use multiple collapsible sections, one per background topic, so people can skip the ones they already know about or find boring.
Anything you write outside a collapsible section should not refer to anything inside one; otherwise readers are forced to open the collapsible section, which defeats its purpose.
An alternative to collapsible sections is linking to your previous posts, but that only works if your previous posts fit well with your current post.
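(For what it's worth, LessWrong's editor has a built-in collapsible-section block, so you don't need to write any markup there. But if you ever cross-post somewhere whose editor only accepts raw HTML, a collapsible section is just the standard `<details>` element; the summary text and body below are placeholders:)

```html
<!-- A minimal sketch of a collapsible section in plain HTML.
     The heading and body text are placeholders, not real content. -->
<details>
  <summary>Background: terminology and prior work</summary>
  The background material goes here. It stays hidden until the
  reader clicks the summary, so people who already know the topic
  can skip it entirely.
</details>
```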
Typo
“What AI Safety Seeks to Prevent: Human well-being, societal values, fundamental rights, and environmental integrity.”
Maybe change “prevent” to “protect.” Grumpy old users discriminate against new users, and a mere typo near the start can “confirm their suspicions” that this is another low-quality post.
Writing like a lawyer
Lawyers and successful bloggers/authors have the opposite instincts: lawyers try to be as thorough as possible while successful bloggers try to convey a message in as few words as possible.
People’s attention spans vary dramatically when the topic is something cool and amusing, but my vague opinion is that important policy work is necessarily a little less cool.
I could be completely wrong. I haven’t succeeded in writing good posts either. So please don’t take my advice too seriously! I forgot to give this disclaimer last time.
Random note: LessWrong has its own internal jargon, where people talk about “AI Notkilleveryoneism.”
The reason is that the words “AI safety” and “AI alignment” have been heavily abused by organizations doing Safetywashing. See some of the discussion here.[1]
I’m not saying you should adopt the term “AI Notkilleveryoneism,” since policymakers might laugh at it. But it doesn’t hurt to learn about this drama.