The timing of this post is quite serendipitous for me. Much of what you wrote resonates heavily. First comment on LW, by the way!
I’m deeply interested in the technical problems of alignment and have recently read through the AI Safety Fundamentals course curriculum. I’m looking for any opportunity to discuss these ideas with others. I’ve been adjacent to the rationalist community for a few years (a few friends, EA, ACX, etc.), but the need to sanity-check my own thoughts on alignment has made engaging with the LW community seem invaluable.
I’ve found that the barrier to entry seems high from a career perspective. However I’d like to spend my time, my day job limits the number of focused hours I can commit to upskilling in this domain, so a community of people in similar positions would be invaluable. I’m more than willing to self-study and do independent research, but I’m eager for some guidance so I can set goals appropriately.
Provided the members have shared the links publicly, would you mind passing along any resources you find where groups are working on particular problems?
Glad to hear that my post is resonating with some people!
I definitely understand the difficulty of allocating time while also working a full-time job. As I gather resources and connections, I’ll make sure to spread awareness of them.
One thing to note, though: I found the more passive approach of waiting for opportunities to come to me much less effective than forging them myself (even though I was spending a significant amount of time looking for those opportunities).
A specific, more detailed recommendation for how to do this will depend heavily on your level of ML experience and your time availability. My more general recommendation is to apply for a cohort of BlueDot Impact’s AI Governance or AI Safety Fundamentals courses (I believe applications for the early 2024 session of the AI Safety Fundamentals course are currently open). Taking a course like this provides opportunities to build connections, which can be leveraged into independent projects and efforts. I found the AI Governance session very doable alongside a full-time position (I was still full time at my current job when I started it). I can’t say the same definitively for the AI Safety Fundamentals course, since I did the readings independently rather than through a formal session, but it seems to be a similar time commitment. I think taking the course with a cohort would definitely be valuable, even for those who have already completed the readings independently.
Thanks so much for the thoughtful response. I’ll certainly reach out and try to participate in BlueDot Impact’s course now that I’m more familiar with the content, and will stay on the lookout for anything you document as you go through your own journey! Even just a few of the names and resources so far have been incredibly valuable pointers to the right corners of the internet.
I don’t have karma yet, but if I did, I’d gladly open my wallet :)