A key question when I look at a new user on LessWrong trying to help with AI is, well, are they actually likely to be able to contribute to the field of AI safety?
If they are aiming to make direct novel intellectual contributions, this is in fact fairly hard. People have argued back and forth about how much raw IQ, conscientiousness, or other signs of promise a person needs to have. There have been some posts arguing that people are overly pessimistic and gatekeeping-y about AI safety.
But I think it’s just pretty importantly true that it takes a fairly significant combination of intelligence and dedication to contribute. Not everyone is cut out for doing original research. Many people pre-emptively focus on community building and governance because that feels easier and more tractable to them than original research. But those areas still require you to have a pretty solid understanding of the field you’re trying to govern or build a community for.
If someone writes a post on AI that seems like a bad take, which isn’t really informed by the real challenges, should I be encouraging that person to make improvements and try again? Or just say “idk man, not everyone is cut out for this?”
Here’s my current answer.
If you’ve written a take on AI that didn’t seem to hit the LW team’s quality bar, I would recommend some combination of:
Read ~16 hours of background content, so you’re not just completely missing the point. (I have some material in mind that I’ll compile later; for now, the point is to highlight roughly the amount of effort involved.)
Set aside ~4 hours to think seriously about the topic. Try to find one sub-question you don’t know the answer to, and make progress answering that sub-question.
Write up your thoughts as a LW post.
(For each of these steps, organizing some friends to work together as a reading or thinking group can be helpful to make it more fun)
This doesn’t guarantee that you’ll be a good fit for AI safety work, but I think this is an amount of effort where it’s possible for a LW mod to look at your work, and figure out if this is likely to be a good use of your time.
Some people may object “this is a lot of work.” Yes, it is. If you’re the right sort of person you may just find this work fun. But the bottom line is yes, this is work. You should not expect to contribute to the field without putting in serious work, and I’m basically happy to filter out of LessWrong people who a) seem to superficially have pretty confused takes, and b) are unwilling to put in 20 hours of research work.