
ank

Karma: 8

This is the result of three years of thinking about and modeling hyper-futuristic and current ethical systems, the link between the two, and the ultimate future. I've been working on this almost full-time, and I have some very specific answers to alignment and safety questions. Imagine we have no physical or computational limitations: what ultimate future would we build in the best-case scenario? If you know where you are going, it's harder to go astray.

I'm pretty sure I've figured it out. Imagine you met someone from the ultimate future and they started describing it: you'd be overwhelmed and might think they were crazy. It's a blessing to know what the future might hold, and a curse to see that humanity is heading straight toward dystopia. That's why I decided to write down everything I've learned, so I know I did everything I could to stop the dystopias that are on their way. Have a nice day!

Places of Loving Grace [Story]

ank · Feb 18, 2025, 11:49 PM
−1 points
0 comments · 4 min read · LW link

Artificial Static Place Intelligence: Guaranteed Alignment

ank · Feb 15, 2025, 11:08 AM
2 points
2 comments · 1 min read · LW link

Static Place AI Makes AGI Redundant: Multiversal AI Alignment & Rational Utopia

ank · Feb 13, 2025, 10:35 PM
0 points
0 comments · 11 min read · LW link

Rational Utopia, Multiversal AI Alignment, Steerable ASI, Ultimate Human Freedom (V. 3: Multiversal Ethics, Place ASI)

ank · Feb 11, 2025, 3:21 AM
13 points
7 comments · 29 min read · LW link

How To Prevent a Dystopia

ank · Jan 29, 2025, 2:16 PM
−3 points
4 comments · 1 min read · LW link

ank’s Shortform

ank · Jan 21, 2025, 4:55 PM
1 point
2 comments · 1 min read · LW link