The alignment problem (or are you asking what concerns us the most within that scope?)
Yes, what issue concerns you most within the scope of AI alignment? (Edited original q for clarity, thanks)
That’s assuming most people here want this—I don’t think that’s the case
Why do you think most people here would not want greater public awareness around the topic of AI safety? (Removed the assumption from the original q)
That is, that humans eventually create AGI, right?
Indeed! (Edited original q to specify this)