This is actually my primary focus. I believe it can be done through a complicated process that targets human psychology, but to explain it simply:
- Spread satisfaction & end suffering.
- Spread rational decision-making.
To simplify further: if everyone were like us, and no one were on the chopping block should AGI not get created, then the incentive to create AGI ceases and we effectively secure decades for AI-safety efforts.
Here is a post I wrote on the subject:
https://www.lesswrong.com/posts/GzMteAGbf8h5oWkow/breaking-beliefs-about-saving-the-world