Hey Hoagy, thanks for replying, I really appreciate it!
I fixed that link, thanks for pointing it out.
Here is a quick response to some of your points:
My feeling with the posts is that, given the diversity of situations of people who are currently AI safety researchers, there’s not likely to be a particular key set of understandings such that a person could walk into the community and know where they can be helpful.
I tend to feel that things could be much better with little effort. As an analogy, consider the difference between trying to pick an AI safety project to work on now versus before we had curation and evaluation posts like this.
I’ll note that those posts seem very useful, but they are now almost a year out of date and were only ever based on a small set of opinions. It wouldn’t be hard to produce something much better.
Similarly, I think that there is room for a lot more of this “coordination work” here and lots of low-hanging fruit in general.
It’s going to be more like: here are the groups and organizations which are doing good work, what roles or other things do they need now, and what would help them scale up their ability to produce useful work.
This is exactly what I want to know! From my perspective, effective movement builders can increase contributors, contributions, and coordination within the AI safety community by starting, sustaining, and scaling useful projects.
Relatedly, I think that we should ideally have some sort of community consensus-gathering process to figure out what is good and bad movement building (e.g., which groups are good/bad, and what the collective set of good groups needs).
The shared language stuff, and all of what I produced in my post, is mainly a means to that end. I really just want to make sure that, before I survey the community to understand who wants what and why, there is some sort of standardised understanding and language about movement building, so that people don’t just write it off as a particular type of recruitment done without supervision by non-experts.