(Lately, it’s seemed to me that focusing my time on nearer-term / early-but-post-AGI futures is better than spending it discussing ideas like these on the margin, though this may say more about me than about other people; I’m not sure.)
I endorse Three worlds collide as a fun and insightful read. It states upfront that it does not feature AGI:
This is a story of an impossible outcome, where AI never worked, molecular nanotechnology never worked, biotechnology only sort-of worked; and yet somehow humanity not only survived, but discovered a way to travel Faster-Than-Light: The past’s Future.
Yet its themes are quite relevant for civilization-scale outer alignment.
Some more links from the philosophical side that I’ve found myself returning to a lot:
The fun theory sequence
Three worlds collide