My summary:
(in case you want to copy-paste and share this)
Article by Conjecture, from February 25th, 2023.
Title: `Cognitive Emulation: A Naive AI Safety Proposal`
Link: <https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal>
What they want: Build human-like AI (AI whose cognition works the way our minds do), as opposed to the black-boxy, alien AI we have today.
Why they want it: Our existing systems, ideas, intuitions, etc., for dealing with humans, and for knowing what behaviors to expect of them, would then hold for such AI, and nothing insane and dangerous would happen. Building on that, we can explore and study these systems (we have a lot to learn from systems at even this level of capability), and then leverage them to solve the harder problems that come with aligning superintelligence.
(Note on this comment: I posted (something like) the above on Discord, and am copying it here because I think it could be useful. Though I don’t know if this kind of non-interactive comment is okay.)