As nobody else has mentioned it yet in this comment section: AI Safety Support is a resource hub specifically set up to help people get into the alignment research field.
I am a 50-year-old independent alignment researcher. I guess I need to mention for the record that I never read the sequences, and do not plan to. The one piece of Yudkowsky's writing I'd recommend that everybody interested in alignment read is Corrigibility. But in general: read broadly, and also beyond this forum.
I agree with John’s observation that some parts of alignment research are especially
well-suited to independent researchers, because they are about coming up
with new frames/approaches/models/paradigms/etc.
But I would like to add a word of warning. Here are two roughly equally valid ways to interpret LessWrong/Alignment Forum:

1. It is a very big tent that welcomes every new idea.
2. It is a social media hang-out for AI alignment researchers who prefer to engage only with particular alignment sub-problems and particular styles of doing alignment research.
So while I agree with John's call for more independent researchers developing good new ideas, I need to warn you that your good new ideas may not automatically trigger a lot of interest or feedback here. Don't tie your sense of self-worth too strongly to this forum.
On avoiding bullshit: discussions on this forum are often a lot better than on some other social media sites, but Sturgeon's law still applies.