As someone who is quite concerned about the AI Alignment field having had a major negative impact via accelerating AI capabilities, I also agree with this. It’s really quite unlikely for your first pieces of research to make a huge difference. I think the key people who I am worried will drive forward capabilities are people who have been in the field for quite a while and have found traction on the broader AGI problems and questions (as well as people directly aiming towards accelerating capabilities, though the worry there is somewhat different in nature).