I think it is likely best to push against including that sort of thing in the Overton window of what’s considered AI safety / AI alignment literature.
I’m really sympathetic to these concerns, but I’m worried about the possible unintended consequences of trying to do this. There will inevitably be a large group of people working on short- and medium-term AI safety (due to commercial incentives), and pushing them out of “AI safety / AI alignment literature” risks antagonizing them and creating an adversarial relationship between the two camps, and/or creating a larger incentive for people to stretch the truth about how robust to scale their ideas are. Is this something you considered?
I’m not sure how to think about this. My intuition is that this doesn’t need to be a problem if people in (my notion of) the AI alignment field just do the best work they can do, so as to demonstrate by example what the larger concerns are. In other words, win people over by being sufficiently exciting rather than by being antagonistic/exclusive. I suppose that’s not very consistent with my comment above.