Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”
I don’t think that AI safety is important, which I guess makes me one of the “more varied participants made wary by an explicit focus on AI.” I’m happy you’re being explicit about your goals, but I don’t like them.