This is one downside to be careful of with outreach, but on net I think it’s quite good to have more high-quality analyses of AI risk. The goal should be to get people to take the problem seriously, not to get people to blindly accept the first safety-related research opportunity they can find.