+1 to your point that while the standard for making allies in other research disciplines should be low, the standards for what we fund should be high. But I suspect I have a somewhat wider vision of an ideal AIS&L community than you do. I don’t think I would characterize many of the examples of AIS-inspired neartermist work that I can think of as hijacking.
To your second point, FWIW, I personally try to ignore trolls as much as possible and do my best to emphasize that there are a lot of neartermist and longtermist reasons to care about almost all AI alignment work.