Devil’s Advocate: Adverse Selection Against Conscientiousness
Partially in reply to “Request: stop advancing AI capabilities”.
Consider this argument:

“[X field] is often [dangerous / bad]. If you’re [pro-social / thoughtful / conscientious], please avoid [X field].”
Let’s grant these points as true: X field is indeed bad or dangerous; people who are some mix of pro-social, thoughtful, and conscientious will recognize that; and they are exhorted not to join the field.
What happens next, causally?
(1) If the field has a limited supply of people with the relevant background to contribute, the result will be to slow its progress.
(2) If the field is below a “critical mass” of contributors, it might fail to make progress entirely.
(3) However, if the field already has a critical mass of contributors, the end result will be adverse selection against pro-sociality, thoughtfulness, and conscientiousness: the people most likely to heed the exhortation remove themselves, while the field progresses without them (the toy simulation below illustrates this).
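To make point (3) concrete, here is a minimal toy simulation. This is my own sketch, and every parameter (field size, conscientious share, probability of heeding the warning, critical mass) is an illustrative assumption, not a figure from Soares’s post or mine:

```python
# Toy model of adverse selection: a field needs CRITICAL_MASS contributors
# to keep making progress. An exhortation causes each conscientious
# contributor to leave with probability HEED_PROB; everyone else stays.
# All numbers below are made up for illustration.
import random

random.seed(0)

FIELD_SIZE = 1000          # contributors before the exhortation
CONSCIENTIOUS_SHARE = 0.5  # fraction who would take the warning seriously
HEED_PROB = 0.8            # chance a conscientious contributor actually leaves
CRITICAL_MASS = 200        # contributors needed for the field to progress

# True marks a pro-social / thoughtful / conscientious contributor.
contributors = [True] * int(FIELD_SIZE * CONSCIENTIOUS_SHARE) + \
               [False] * int(FIELD_SIZE * (1 - CONSCIENTIOUS_SHARE))

# Conscientious contributors self-select out with probability HEED_PROB.
remaining = [c for c in contributors
             if not (c and random.random() < HEED_PROB)]

print(f"contributors: {FIELD_SIZE} -> {len(remaining)}")
print(f"still above critical mass: {len(remaining) >= CRITICAL_MASS}")
print(f"conscientious share: {CONSCIENTIOUS_SHARE:.0%} -> "
      f"{sum(remaining) / len(remaining):.0%}")
```

With these (made-up) parameters the field comfortably retains critical mass, yet the conscientious share of contributors falls from 50% to roughly 17%: progress continues, and the selection effect is the only lasting change.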
I have profound respect, at a distance, for MIRI and Soares. I think we actually agree about the dangers, and I agree that interpretability research (for instance) is a very important, under-resourced problem that would benefit from more contributors.
But I come to one different conclusion: the more thoughtful, pro-social, and conscientious people working at Meta, OpenAI, Google, and the other leading labs, the more likely things go well. I think there is already a critical mass in place for continued growth of machine learning capabilities on the path to AGI, and having people who care and are thoughtful in the room, participating in development, is a good thing rather than a bad thing. I would respectfully advocate the opposite of Soares’s position and encourage thoughtful people to consider joining the strongest teams in the field, to be a voice of reason in the room while shaping development.