It’s useful for AGI researchers to notice that there are safety issues, but not useful for them to conclude that there are “safety issues” which can be dealt with simply by following OAI guidelines. The latter kind of understanding might be worse than none at all, since it makes the problem seem already resolved. So it’s not clear to me that getting people to “admit that there might be safety issues” is, in itself, a worthwhile milestone.