Thank you! This helps me a lot. I will hide the bits about the AI Act in collapsible sections, and I will correct this typo.
One thing I’ve noticed though: most “successful” posts in LW are quite long and detailed, almost paper-length. I thought that by making my post shorter, I may lose nuance.
People’s attention spans stretch dramatically when the topic is something cool and amusing, but my vague opinion is that important policy work is necessarily a little less cool.
I could be completely wrong. I haven’t succeeded in writing good posts either. So please don’t take my advice too seriously! I forgot to give this disclaimer last time.
Random note: LessWrong has its internal jargon, where they talk about “AI Notkilleveryoneism.”
The reason is that the words “AI safety” and “AI alignment” have been heavily abused by organizations doing Safetywashing. See some of the discussion here.[1]
I’m not saying you should adopt the term “AI Notkilleveryoneism,” since policymakers might laugh at it. But it doesn’t hurt to learn about this drama.
Policy work is 100% less cool XD. But it should be concerning for us all that the vast majority of policymakers I’ve talked to did not even know that such a thing as “mechanistic interpretability” exists, and think that alignment is some sort of security ideal…
So what I am doing here may be a necessary evil.
Hmm! If you talk to policymakers face to face, that is something you can leverage to get LessWrong folks interested in you!
How many have you talked to, how many will you talk to, and what level are they at?
You might make a short Question Post highlighting that you’re a lawyer talking to some policymakers. Even if it’s a small number and they are low level, a little bit is far better than nothing.
Then you might ask an open-ended question, “what should I say to them?”
And you can include what you are currently saying to them in a collapsed section, or link to previous posts. Maybe summaries of past conversations can be put in collapsed sections too.
I’m not sure. I’m definitely not an expert in good LW posts, but my intuition is that this one might get a better response.
My feeling is that a ton of people on LessWrong are dying to make their message known to policymakers, but their messages are falling on deaf ears. (Arguably, I am one of them: I once wrote this, and got ignored when I cold-emailed policymakers.)
Someone who actually talks to policymakers (albeit European ones… haha) would be the most welcome.
“Albeit European ones”, I laughed so much hahaha. Sorry to disappoint XD. Yes, mainly EU and UK based: members of the European Commission’s expert panel (I am a member too, but I only joined very recently) and influential “think tanks” here in Europe that provide feedback on regulatory initiatives, like the GPAI Codes of Practice.
I will read your post, btw! I am sick of shallow AI risk statements based on product safety legislation that does not account for the evolving, unpredictable nature of AI risk. Oh well.
I will gather more ideas and post a Quick take, as you’ve advised. That was a great idea, thank you!
:) that’s great.
I think you are very modest and have a tendency to undersell the influence you have. Don’t do that in your quick take or post: make it clear from the beginning what positions you hold and who you get to interact with :D