[Question] Does human (mis)alignment pose a significant and imminent existential threat?

(This question was born from my comment on an excellent post, LOVE in a simbox is all you need by @jacob_cannell.)

Why am I asking this question?

I am personally very troubled by what I would equate to human misalignment—our deep divisions, our susceptibility to misinformation and manipulation, and our inability to identify and act collectively on our best interests. I am further troubled by the deleterious effects technology has already had in that regard (think social media). I would like to see efforts not only to produce AI that is itself ethical or aligned (which, don’t get me wrong, I LOVE and find very encouraging), but also to ensure that AI is harnessed to give humans the support they need to realign themselves. That support seems critical to achieving the ultimate goal: alignment of the (Humans + AI) collaboration as a whole.

However, that’s just my current perspective. And while I think I have good reasons for it, I recognize my limitations—in knowledge and experience, and in my power to effect change. So I’m curious to hear other perspectives that might help me become more right, or at least understand other viewpoints, or perhaps connect with like-minded others so we can figure out together what we might be able to do about it.

How is this practical?

If others here share my concerns and believe this is a significant threat that warrants action, I will likely pose follow-on questions for discussion toward that end. For instance, I’d love to hear about any existing efforts that address these concerns. Or, if anyone thinks that creating and deploying aligned AI will naturally help humans overcome these issues, I’d be curious to hear their reasoning. I have some ideas of my own too, but I’ll hold them back until I’ve done a lot more listening and understanding first, to establish some mutual understanding and trust.
