This is the result of three years of thinking about and modeling current and hyper‑futuristic ethical systems, the link between the two, and the ultimate future itself. I’ve been working on this almost full-time, and I have some specific answers to alignment and safety questions. Imagine we had no physical or computational limitations: what ultimate future would we build in the best-case scenario? If you know where you are going, it’s harder to go astray.
I’m pretty sure I’ve figured it out. Imagine you met someone from the ultimate future and they started describing it: you’d be overwhelmed and might think they were crazy. It’s a blessing to know what the future might hold, and a curse to see that humanity is heading straight toward dystopia. That’s why I decided to write down everything I’ve learned, so that I know I did everything I could to stop the dystopias that are on their way. Have a nice day!
Yep, fixed it. I wrote more about alignment, and it looks like most of my title choices were over the top :) I’d be happy to hear your suggestions on how to improve more of the titles: https://www.lesswrong.com/users/ank