The timing of this post is quite serendipitous for me. Much of what you wrote resonates strongly. First comment on LW, by the way!
I’m deeply interested in the technical problems of alignment and have recently worked through the AI Safety Fundamentals course. I’m looking for any opportunity to discuss these ideas with others. I’ve been adjacent to the rationalist community for a few years (a few friends, EA, ACX, etc.), but the need to sanity-check my own thoughts on alignment has made engaging with the LW community seem invaluable.
I’ve found the barrier to entry high from a career perspective. Despite how I’d like to spend my time, my day job limits the focused hours I can commit to upskilling in this domain, and a community of people in similar positions would be invaluable. I’m more than willing to self-study and do independent research, but I’m eager for some guidance so I can set goals appropriately.
Provided the members have shared the link publicly, would you mind passing along any resources you find where groups are working on particular problems?
Thanks so much for the thoughtful response. I’ll certainly reach out and try to participate in BlueDot Impact’s course now that I’m more familiar with the content, and will stay on the lookout for anything you document as you go through your own journey! Even just a few of the names and resources so far have been incredibly valuable pointers to the right corners of the internet.
I don’t have karma yet, but if I did, I’d gladly open my wallet :)